AI Server Hardware Specifications - TAR Gallery Scraper

🎯 Project Overview

Purpose: Rack-mounted Linux server running GPT-OSS-20B locally, so AI-powered gallery scraping is not limited by per-request cloud API costs
Target: 8000+ gallery websites with intelligent analysis and marketing opportunity creation
Architecture: Local AI processing with DynDNS external access
Budget: $3,000 - $5,000 (one-time investment)


🖥️ Hardware Specifications

Primary Configuration: GPT-OSS-20B Server

GPU (Critical Component)

# Primary GPU: NVIDIA RTX 4090
- VRAM: 24GB GDDR6X
- CUDA Cores: 16,384
- Memory Bandwidth: 1,008 GB/s
- Power: 450W TDP
- Cost: $1,600

# Alternative: NVIDIA RTX 4080 Super (if 4090 unavailable)
- VRAM: 16GB GDDR6X
- CUDA Cores: 10,240
- Cost: $1,200

CPU

# AMD Ryzen 9 7900X
- Cores: 12 cores / 24 threads
- Base Clock: 4.7 GHz
- Boost Clock: 5.6 GHz
- Cache: 76MB total
- TDP: 170W
- Socket: AM5
- Cost: $400

# Alternative: Intel Core i9-13900K
- Cores: 24 cores / 32 threads (8P + 16E)
- Base Clock: 3.0 GHz (P-core)
- Boost Clock: 5.8 GHz (P-core)
- Cost: $450

Motherboard

# ASUS ROG STRIX X670E-E GAMING WIFI
- Socket: AM5
- Chipset: AMD X670E
- Memory: DDR5, up to 128GB
- PCIe: 5.0 x16 for GPU
- Networking: 2.5GbE + WiFi 6E
- USB: 10Gbps ports
- Cost: $350

# Alternative: MSI MPG X670E CARBON WIFI
- Similar specs, slightly cheaper
- Cost: $320

Memory (RAM)

# 64GB DDR5-6000 (2x32GB)
- Speed: 6000 MHz
- Latency: CL30
- ECC: Non-ECC (sufficient for AI workloads)
- Brand: G.SKILL Trident Z5 or Corsair Dominator
- Cost: $250

# Alternative: 128GB DDR5-5600 (4x32GB)
- For future-proofing or larger models
- Cost: $450

Storage

# Primary: Samsung 990 PRO 2TB NVMe SSD
- Interface: PCIe 4.0 x4
- Read Speed: 7,450 MB/s
- Write Speed: 6,900 MB/s
- Endurance: 1,200 TBW
- Cost: $150

# Secondary: Samsung 870 EVO 4TB SATA SSD
- For model storage and backups
- Cost: $300

Power Supply

# Seasonic PRIME TX-1000W
- Wattage: 1000W
- Efficiency: 80+ Titanium
- Modular: Fully modular
- Warranty: 12 years
- Cost: $250

# Alternative: Corsair HX1000i
- Similar specs, slightly cheaper
- Cost: $220

Case

# Rosewill RSV-L4500U 4U Rack Mount
- Form Factor: 4U rack mount
- Drive Bays: 8x 3.5" hot-swappable
- Expansion Slots: 7 full-height
- Cooling: 3x 120mm fans included
- Cost: $200

# Alternative: iStarUSA D-400 4U
- Similar specs, different brand
- Cost: $180

Cooling

# CPU Cooler: Noctua NH-U12A
- Type: Air cooler
- Height: 158mm (fits in 4U case)
- Noise: 22.4 dB(A)
- Cost: $100

# Case Fans: 3x Noctua NF-A12x25 PWM
- Size: 120mm
- Airflow: 102.1 m³/h
- Noise: 22.6 dB(A)
- Cost: $90 (3x $30)

Network Card

# Intel X550-T2 10GbE Dual Port
- Speed: 10 Gigabit Ethernet
- Ports: 2x RJ45
- PCIe: 3.0 x4
- Cost: $200

# Alternative: Mellanox ConnectX-3 (used)
- 10GbE SFP+ ports
- Cost: $50-100

💰 Cost Breakdown

Primary Configuration (GPT-OSS-20B)

Component          | Cost    | Notes
-------------------|---------|------------------
GPU (RTX 4090)     | $1,600  | 24GB VRAM
CPU (Ryzen 9 7900X)| $400    | 12 cores
Motherboard        | $350    | X670E
RAM (64GB DDR5)    | $250    | 6000 MHz
Storage (2TB NVMe) | $150    | Primary
Storage (4TB SATA) | $300    | Secondary
PSU (1000W)        | $250    | Titanium
Case (4U Rack)     | $200    | Rosewill
Cooling            | $190    | CPU + case fans
Network Card       | $200    | 10GbE
-------------------|---------|------------------
TOTAL              | $3,890  | + shipping/tax

Upgraded Configuration (Future-Proof)

Component          | Cost    | Notes
-------------------|---------|------------------
GPU (RTX 4090)     | $1,600  | 24GB VRAM
CPU (Ryzen 9 7950X)| $600    | 16 cores
Motherboard        | $350    | X670E
RAM (128GB DDR5)   | $450    | 5600 MHz
Storage (2TB NVMe) | $150    | Primary
Storage (8TB SATA) | $500    | Secondary
PSU (1200W)        | $300    | Platinum
Case (4U Rack)     | $200    | Rosewill
Cooling            | $190    | CPU + case fans
Network Card       | $200    | 10GbE
-------------------|---------|------------------
TOTAL              | $4,540  | + shipping/tax

🔧 Software Stack

Operating System

# Ubuntu Server 22.04 LTS
- Long-term support until 2027
- Excellent GPU driver support
- Stable for production use
- Free download
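
A minimal sketch of the post-install GPU setup on Ubuntu Server 22.04; `ubuntu-drivers` picks the recommended NVIDIA driver, and exact driver versions will differ over time.

# GPU driver setup (run once after the OS install):
sudo apt update && sudo apt upgrade -y
sudo apt install -y ubuntu-drivers-common
sudo ubuntu-drivers autoinstall   # installs the recommended NVIDIA driver
sudo reboot

# After reboot, confirm the RTX 4090 and driver are visible:
nvidia-smi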

AI Framework Stack

# Core AI Software:
- Python 3.10+
- CUDA 12.0+
- PyTorch 2.0+
- Transformers 4.30+
- vLLM 0.2.0+
- Ollama (alternative)

# Web Server:
- FastAPI (Python)
- Nginx (reverse proxy)
- Gunicorn with Uvicorn workers (ASGI server for FastAPI)

# Monitoring:
- Prometheus + Grafana
- System monitoring
- GPU monitoring
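
A hedged sketch of installing this stack in a virtual environment and exposing the model through vLLM's OpenAI-compatible server on port 8000. The model identifier openai/gpt-oss-20b refers to the public Hugging Face release; exact flags vary between vLLM versions (newer releases also provide a `vllm serve` entry point).

# Install the core AI stack in an isolated environment:
sudo apt install -y python3-venv python3-pip
python3 -m venv ~/tar-ai && source ~/tar-ai/bin/activate
pip install --upgrade pip
pip install torch transformers vllm fastapi "uvicorn[standard]"

# Serve the model through vLLM's OpenAI-compatible API on port 8000:
python -m vllm.entrypoints.openai.api_server \
    --model openai/gpt-oss-20b \
    --port 8000 \
    --gpu-memory-utilization 0.90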

Network Services

# DynDNS Client:
- ddclient (for No-IP, DuckDNS, etc.)
- Automatic IP updates

# Firewall:
- UFW (Uncomplicated Firewall)
- Port forwarding configuration

# VPN (Optional):
- OpenVPN
- WireGuard

🌐 Network Architecture

Internal Network

# Server Configuration:
- IP: 192.168.1.100 (static)
- Hostname: tar-ai-server
- Services:
  * SSH: Port 22
  * AI API: Port 8000
  * Ollama: Port 11434
  * Monitoring: Port 3000 (Grafana)
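
A minimal netplan sketch for the static address above, wrapped in a shell heredoc; the interface name enp5s0 and gateway 192.168.1.1 are assumptions, so check `ip link` and your router first.

# Static IP via netplan (interface name and gateway are assumptions):
sudo hostnamectl set-hostname tar-ai-server
sudo tee /etc/netplan/01-tar-static.yaml > /dev/null <<'EOF'
network:
  version: 2
  ethernets:
    enp5s0:
      dhcp4: false
      addresses: [192.168.1.100/24]
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [1.1.1.1, 8.8.8.8]
EOF
sudo netplan apply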

External Access

# DynDNS Setup:
- Service: No-IP or DuckDNS
- Hostname: your-tar-server.ddns.net
- Auto-update: Every 5 minutes

# Port Forwarding (Router):
- External 8000 → Internal 192.168.1.100:8000
- External 22 → Internal 192.168.1.100:22 (SSH)
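
A hedged ddclient sketch matching the 5-minute update interval above, using the dyndns2 protocol against No-IP; the login, password, and hostname are placeholders, and DuckDNS users would use its token-based update mechanism instead.

# Install and configure ddclient (credentials and hostname are placeholders):
sudo apt install -y ddclient
sudo tee /etc/ddclient.conf > /dev/null <<'EOF'
daemon=300                        # update every 5 minutes
use=web, web=checkip.dyndns.com
protocol=dyndns2
server=dynupdate.no-ip.com
login=your-noip-username
password='your-noip-password'
your-tar-server.ddns.net
EOF
sudo systemctl restart ddclient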

Security Configuration

# Firewall Rules:
sudo ufw allow 22/tcp    # SSH
sudo ufw allow 8000/tcp  # AI API
sudo ufw allow 11434/tcp # Ollama
sudo ufw allow 3000/tcp  # Monitoring
sudo ufw enable

# API Security:
- API key authentication
- Rate limiting
- Request validation
- SSL/TLS encryption
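
One way to get the SSL/TLS piece is to put Nginx in front of the API and let certbot manage the certificate; a sketch under the assumption that the DynDNS hostname above resolves to the router and that ports 80/443 are also forwarded. API-key checks, rate limiting, and request validation would then live in the FastAPI layer.

# Reverse proxy + TLS for the public API endpoint:
sudo apt install -y nginx certbot python3-certbot-nginx
sudo tee /etc/nginx/sites-available/tar-ai > /dev/null <<'EOF'
server {
    listen 80;
    server_name your-tar-server.ddns.net;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/tar-ai /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
sudo certbot --nginx -d your-tar-server.ddns.net   # obtains and auto-renews the certificate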

🚀 Performance Expectations

GPT-OSS-20B Performance

# Inference Speed:
- 10-20 tokens/second (RTX 4090)
- 5-10 tokens/second (RTX 4080)

# Memory Usage:
- Model: ~16GB VRAM
- System: ~8GB RAM
- Available: 8GB VRAM + 56GB RAM

# Throughput:
- 1 gallery analysis: ~30-60 seconds
- 8000 galleries: ~67-133 hours
- Daily capacity: ~1,440-2,880 galleries
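
These figures follow directly from the per-gallery estimate; a quick back-of-the-envelope check, assuming uninterrupted 24/7 operation:

# Daily capacity and total hours at 30 s / 60 s per gallery:
echo $(( 86400 / 30 )) $(( 86400 / 60 ))    # 2880 and 1440 galleries/day
echo "8000*30/3600; 8000*60/3600" | bc -l   # ~66.7 and ~133.3 hours for 8,000 galleries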

Scaling Considerations

# Single GPU Limit:
- ~2,880 galleries per day (theoretical maximum at 30 s/gallery)
- ~1,200 galleries per day (conservative, allowing for retries and overhead)

# Multi-GPU Options:
- Add second RTX 4090: +100% capacity
- Upgrade to RTX 6000 Ada: +50% capacity
- Cloud burst: Hybrid approach

📊 Power & Cooling Requirements

Power Consumption

# Peak Power:
- GPU: 450W (RTX 4090)
- CPU: 170W (Ryzen 9 7900X)
- System: 100W
- Total: ~720W peak

# Average Power:
- Idle: ~100W
- Light load: ~300W
- Full load: ~600W

# Monthly Cost (24/7 operation):
- Average: 400W × 24h × 30d = 288 kWh
- Cost: ~$50-100/month (at roughly $0.17-0.35/kWh)

Cooling Requirements

# Heat Output:
- GPU: ~450W thermal
- CPU: ~170W thermal
- Total: ~620W heat

# Cooling Solution:
- 3x 120mm case fans
- CPU air cooler
- GPU blower style (recommended for rack)
- Room temperature: 20-25°C
- Humidity: 40-60%

🔄 Deployment Phases

Phase 1: Basic Setup

# Week 1-2:
1. Hardware assembly
2. Ubuntu Server installation
3. GPU driver setup
4. Basic network configuration
5. AI framework installation

Phase 2: AI Model Deployment

# Week 3-4:
1. Download GPT-OSS-20B
2. Test local inference
3. API server setup
4. Performance testing
5. Security configuration
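
A hedged sketch of steps 1-2, assuming the weights come from the openai/gpt-oss-20b Hugging Face repo (Ollama's gpt-oss:20b tag is the alternative) and that the vLLM OpenAI-compatible server from the software stack section is already running on port 8000.

# 1. Pull the model weights ahead of time:
pip install -U "huggingface_hub[cli]"
huggingface-cli download openai/gpt-oss-20b --local-dir /opt/models/gpt-oss-20b

# Alternative: let Ollama manage the weights (serves on port 11434):
# curl -fsSL https://ollama.com/install.sh | sh && ollama pull gpt-oss:20b

# 2. Smoke-test local inference through the OpenAI-compatible endpoint:
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/gpt-oss-20b",
       "messages": [{"role": "user", "content": "Summarise this gallery website in one line."}],
       "max_tokens": 64}'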

Phase 3: TAR Integration

# Week 5-6:
1. Update PHP scraper code
2. Test with local AI server
3. DynDNS configuration
4. External access testing
5. Monitoring setup

Phase 4: Production

# Week 7-8:
1. Full gallery scraping test
2. Performance optimization
3. Backup configuration
4. Documentation
5. Go live

🛠️ Maintenance & Monitoring

System Monitoring

# Hardware Monitoring:
- GPU temperature and usage
- CPU temperature and usage
- Memory usage
- Storage health
- Network performance

# AI Model Monitoring:
- Inference speed
- Memory usage
- Error rates
- Response quality
- Cost tracking (vs cloud)
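
A few stock commands that cover the hardware side of this list; the Prometheus + Grafana dashboards above would typically scrape the same data via node_exporter and NVIDIA's dcgm-exporter.

# GPU temperature, utilisation and VRAM use:
nvidia-smi --query-gpu=temperature.gpu,utilization.gpu,memory.used,memory.total \
           --format=csv,noheader

# CPU temperatures, storage health, disk and memory usage:
sudo apt install -y lm-sensors smartmontools
sensors
sudo smartctl -H /dev/nvme0n1
df -h / && free -h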

Backup Strategy

# Data Backup:
- Model weights: External drive
- Configuration: Git repository
- Logs: Rotating backup
- Database: Existing TAR backup

# Disaster Recovery:
- System image backup
- Configuration documentation
- Spare parts inventory
- Cloud fallback option
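
A minimal nightly-backup sketch matching the strategy above; the paths /opt/models, /opt/tar-config and /mnt/backup, and the git remote, are placeholders to adapt.

#!/usr/bin/env bash
# Nightly backup: model weights and configs to the backup drive, configs to git
set -euo pipefail

rsync -a --delete /opt/models/ /mnt/backup/models/
rsync -a /etc/nginx /etc/ddclient.conf /etc/netplan /mnt/backup/etc/

# Rotate compressed logs, keep the last 14 days:
mkdir -p /mnt/backup/logs
tar czf /mnt/backup/logs/syslog-$(date +%F).tar.gz /var/log/syslog*
find /mnt/backup/logs -name 'syslog-*.tar.gz' -mtime +14 -delete

# Push configuration changes to the git repository:
cd /opt/tar-config
git add -A
git diff --cached --quiet || git commit -m "backup $(date +%F)"
git push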

🎯 Success Metrics

Performance Targets

# Speed:
- Gallery analysis: <60 seconds
- 8000 galleries: <48 hours
- Daily throughput: >1,000 galleries

# Quality:
- Extraction accuracy: >90%
- False positives: <5%
- Marketing opportunities: >80% relevance

# Cost:
- Hardware ROI: <12 months
- Monthly savings: >$300 vs cloud
- Total cost: <$5,000

Reliability Targets

# Uptime:
- System availability: >99%
- Planned maintenance: <4 hours/month
- Unplanned downtime: <2 hours/month

# Data Quality:
- Successful extractions: >95%
- Data accuracy: >90%
- Processing errors: <2%

📋 Procurement Checklist

Hardware Components

- GPU, CPU, motherboard, RAM, NVMe + SATA storage, PSU, 4U rack case, CPU cooler + case fans, 10GbE NIC (per the cost breakdown above)
- UPS and spare parts (per the risk mitigation plan)

Software & Services

- Ubuntu Server 22.04 LTS install media
- AI stack: Python, CUDA, PyTorch, Transformers, vLLM (or Ollama)
- Web/API and monitoring: FastAPI, Nginx, Prometheus + Grafana
- DynDNS account (No-IP or DuckDNS)

Network & Security

- Static internal IP (192.168.1.100) and router port forwarding
- UFW firewall rules and SSL/TLS certificate
- API keys and rate limiting for the public endpoint


🚨 Risk Mitigation

Hardware Risks

# GPU Failure:
- Mitigation: Spare GPU or cloud fallback
- Impact: High (system unusable)
- Probability: Low (2-3% annually)

# Power Issues:
- Mitigation: UPS backup
- Impact: Medium (data loss risk)
- Probability: Low (1-2% annually)

# Cooling Issues:
- Mitigation: Temperature monitoring
- Impact: High (hardware damage)
- Probability: Low (with proper setup)

Software Risks

# Model Updates:
- Mitigation: Version control, testing
- Impact: Low (performance changes)
- Probability: Medium (quarterly)

# Security Vulnerabilities:
- Mitigation: Regular updates, monitoring
- Impact: High (data breach)
- Probability: Low (with proper security)

📞 Support & Resources

Vendor Support

# Hardware Warranty:
- GPU: 3 years
- CPU: 3 years
- Motherboard: 3 years
- PSU: 12 years
- Storage: 5 years

# Software Support:
- Ubuntu: Community support
- AI frameworks: Documentation + community
- DynDNS: Service provider support

Community Resources

# Forums & Documentation:
- NVIDIA Developer Forums
- Hugging Face Community
- Ubuntu Server Documentation
- vLLM GitHub Issues
- Ollama Discord

Last Updated: January 27, 2025
Status: Ready for Procurement
Next Step: Hardware ordering and assembly
Estimated Timeline: 6-8 weeks to production deployment