SYSTEM OPERATIONAL // BACKEND & AI ENGINEER

SCALABLE SYSTEMS
THAT PERFORM

Architecting autonomous systems and scalable infrastructure that hunt latency and capture value.

WORK EXPERIENCE · VIEW PROJECTS · CURRENTLY BUILDING

// Active Projects

NEXTWORK RAG API

Production-ready Document Q&A API using FastAPI, ChromaDB, and Ollama. Orchestrated with Kubernetes.

PYTHON K8S OLLAMA
VIEW PROJECT

SHIPMENT MICROSERVICE

High-performance async shipment operations built with FastAPI, Redis, and PostgreSQL. Live Swagger docs available.

REDIS ASYNC POSTGRES
VIEW PROJECT

SERVERLESS AI CHAT

Real-time streaming chat platform using Azure Functions and Google Gemini 2.0 for content moderation.

AZURE GEMINI NOSQL
VIEW PROJECT

ULTRASOUND AI CLASSIFIER

CNN model for classifying medical images using Transfer Learning (EfficientNet). Hosted on HuggingFace.

TENSORFLOW FASTAPI CV
VIEW PROJECT

Currently Building

IN DEVELOPMENT

ORPHEUS

Ingests songs from Spotify & YouTube, extracts audio features using deep learning (VibeNet + librosa), stores them as vectors in Qdrant, and serves semantic playlist search with Shazam-style audio recognition.

Orpheus API
Heroku — FastAPI handling search, ingestion, YouTube downloads, and Qdrant storage
Orpheus Extractor
HuggingFace Space (16 GB) — Heavy ML inference with VibeNet + librosa
Qdrant Cloud
Vector database storing 30-dim song embeddings and 32-dim frame embeddings
Spotify API
Song discovery and metadata enrichment across genres
YouTube / yt-dlp
Audio download pipeline with cookie auth and Deno cipher solving
FastAPI · Qdrant · librosa · VibeNet · yt-dlp · Spotify API · APScheduler · Docker · Heroku · HuggingFace

System Flow Diagram

Orpheus — Component Diagram

👤 Client — curl / voice app, Scalar UI; calls POST /search and POST /match
⚡ Orpheus API — FastAPI on Heroku; routes · scheduler · ingestion pipeline
🗃 Qdrant Cloud — orpheus_songs (30-dim) and orpheus_frames (32-dim) collections; the API queries and upserts vectors
🎵 Spotify API — track discovery, metadata + genres; supplies track IDs
🧠 HF Extractor — VibeNet (ONNX) + librosa on a 16 GB HuggingFace Space; receives mp3 uploads, returns vector JSON
▶ YouTube — yt-dlp + Deno; audio download
POST /search — Semantic Playlist
1. Client sends mood query — JSON body with energy, valence, danceability, tempo_min, tempo_max, mode, limit
2. API builds 30-dim query vector — omitted fields default to 0.5; tempo and mode become Qdrant payload filters
3. Qdrant cosine similarity search on the orpheus_songs collection — returns top-K nearest neighbors
4. API enriches results — attaches metadata (title, artist, album, genres, cover art) and computes a similarity score
5. Returns playlist JSON — ranked array of PlaylistSong objects with mood scores, tempo, key, and streaming links
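The defaulting behavior in step 2 can be sketched in a few lines. The exact 30-dim layout is internal to Orpheus, so the field order and the padding of the non-mood dimensions below are illustrative assumptions:

```python
# Sketch of query-vector construction (illustrative — the real 30-dim
# layout and field order are internal to Orpheus).
MOOD_FIELDS = ["energy", "valence", "danceability", "acousticness",
               "instrumentalness", "speechiness", "liveness"]

def build_query_vector(body: dict, dim: int = 30) -> list[float]:
    """Omitted mood fields default to the neutral value 0.5; the
    remaining dimensions are padded with the same neutral value here
    as a placeholder for the non-mood features."""
    vec = [float(body.get(f, 0.5)) for f in MOOD_FIELDS]
    vec += [0.5] * (dim - len(vec))
    return vec

def tempo_mode_filter(body: dict) -> dict:
    """Tempo and mode do not enter the vector — they become
    Qdrant payload filters applied alongside the similarity search."""
    return {k: body[k] for k in ("tempo_min", "tempo_max", "mode") if k in body}
```

Note that `tempo_min`, `tempo_max`, and `mode` never touch the embedding: they act as hard filters, which is why a high-tempo query cannot be "averaged away" by the cosine search.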
POST /match/snippet — Shazam-style Match
1. Client uploads audio file — multipart/form-data with mp3, wav, m4a, ogg, or flac. Optional ?similar_limit=5
2. API forwards to HF Extractor — extracts 32-dim frame embeddings from 5s windows with 2.5s hop across the clip
3. Frame-level Qdrant search on orpheus_frames — each frame vector queries for nearest matches; votes are aggregated per song
4. Best match identified — the song with the highest vote count wins. Returns a confidence score + heard_at timestamp range
5. Similar songs returned — uses the matched song's 30-dim embedding to find similar_limit related tracks from orpheus_songs
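The vote aggregation in steps 3–4 can be sketched as a simple tally. This is a simplified stand-in: the production confidence score may weight votes by similarity rather than counting them equally.

```python
from collections import Counter

def best_match(frame_hits: list[list[str]]) -> tuple[str, float]:
    """Each inner list holds the song IDs returned by the Qdrant search
    for one frame vector. The song collecting the most frame votes wins;
    confidence here is the winner's share of all votes (a simplified
    stand-in for the real score)."""
    votes = Counter(hit for hits in frame_hits for hit in hits)
    song, count = votes.most_common(1)[0]
    return song, count / sum(votes.values())
```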
INGESTION PIPELINE

1. 🎧 Discover — Spotify search across genres
2. Download — yt-dlp audio extraction
3. 🧠 Extract — VibeNet + librosa features
4. 📈 Vectorize — 30-dim song + 32-dim frame embeddings
5. 🗃 Store — upsert to Qdrant Cloud

📖 Interactive API Documentation

Explore the full OpenAPI spec with live request testing. The Scalar UI provides an interactive playground — try any endpoint without writing code.

API Endpoints — Base URL: https://orpheus-api-0bc4904911a6.herokuapp.com
GET /health Liveness check
GET /stats Qdrant collection statistics (song count, frame count)
GET /scheduler/status Next scheduled run, song source, config
POST /scheduler/trigger Manually fire an ingestion cycle
POST /ingest/song Ingest a specific YouTube video by ID
POST /search Semantic song search from structured JSON query
POST /match/snippet Upload an audio clip for Shazam-style matching
GET /scalar Interactive API docs (Scalar UI)

Quick Start Tutorial

Health Check — Verify the API is alive

STEP 0
▶ REQUEST
curl https://orpheus-api-0bc4904911a6.herokuapp.com/health
◀ RESPONSE
{
  "status": "ok"
}
💡 Returns {"status": "ok"} when the service is up. Use this to verify connectivity before making search requests.
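In scripts it is convenient to wrap this check in a small retry loop before hitting the search endpoints. A minimal sketch, assuming `get` is any callable that returns the parsed JSON body of GET /health (e.g. a `httpx.get(...).json()` closure):

```python
import time

def wait_for_healthy(get, retries: int = 5, delay_s: float = 1.0) -> bool:
    """Poll /health until it returns {"status": "ok"} or retries run out.
    `get` is any callable returning the parsed JSON response body."""
    for _ in range(retries):
        if get() == {"status": "ok"}:
            return True
        time.sleep(delay_s)
    return False
```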

Semantic Search — Find songs by vibe

STEP 1
▶ REQUEST
curl -X POST https://orpheus-api-0bc4904911a6.herokuapp.com/search \
  -H "Content-Type: application/json" \
  -d '{
    "energy": 0.8,
    "valence": 0.6,
    "danceability": 0.7,
    "tempo_min": 120,
    "tempo_max": 150,
    "mode": "major",
    "limit": 5
  }'
◀ RESPONSE
{
  "count": 2,
  "playlist": [
    {
      "rank": 1,
      "score": 0.9288,
      "title": "Calm Down",
      "artist": "Rema",
      "album": "Rave & Roses",
      "genres": ["Afrobeats"],
      "tempo_bpm": 106.99,
      "key": "C# minor",
      "mood": {
        "energy": 0.62,
        "valence": 0.74,
        "danceability": 0.80
      },
      "links": {
        "youtube": "https://youtube.com/watch?v=...",
        "spotify": "https://open.spotify.com/track/..."
      }
    }
  ]
}
💡 All vibe fields (energy, valence, danceability, acousticness, instrumentalness, speechiness, liveness) are floats in [0, 1]. Omit any field to default to 0.5. tempo_min/tempo_max are BPM hard filters. mode accepts "major" or "minor".

Match Snippet — Shazam-style recognition

STEP 2
▶ REQUEST
curl -X POST \
  "https://orpheus-api-0bc4904911a6.herokuapp.com/match/snippet?similar_limit=5" \
  -F "file=@recording.mp3"
◀ RESPONSE
{
  "match": {
    "title": "Calm Down",
    "artist": "Rema",
    "youtube_id": "WcIcVapfqXw",
    "heard_at": {
      "start_s": 45.0,
      "end_s": 50.0
    },
    "confidence": 0.94,
    "links": {
      "youtube": "https://youtube.com/watch?v=WcIcVapfqXw",
      "spotify": "https://open.spotify.com/track/..."
    }
  },
  "similar_songs": [
    {
      "title": "Essence",
      "artist": "Wizkid",
      "score": 0.87,
      "links": { "youtube": "..." }
    }
  ]
}
💡 Upload any audio clip (mp3, wav, m4a, ogg, flac — anything ffmpeg can decode). The API extracts 32-dim frame embeddings from 5-second windows with 2.5s hop and matches against the orpheus_frames collection in Qdrant. The similar_limit query param controls how many similar songs to return (default: 5).
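The 5-second / 2.5-second-hop windowing described above determines how many frame vectors a clip produces. A sketch of the window arithmetic (the extractor's actual framing code is internal to the HF Space):

```python
def frame_windows(duration_s: float, win_s: float = 5.0,
                  hop_s: float = 2.5) -> list[tuple[float, float]]:
    """Start/end times of the 5 s analysis windows (2.5 s hop) the
    extractor slides across a clip; only full windows are kept."""
    windows, t = [], 0.0
    while t + win_s <= duration_s:
        windows.append((t, t + win_s))
        t += hop_s
    return windows
```

For example, a 12-second clip yields three full windows (0–5 s, 2.5–7.5 s, 5–10 s), so even short recordings contribute multiple votes to the frame-level match.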

Library Stats — Check collection health

STEP 3
▶ REQUEST
curl https://orpheus-api-0bc4904911a6.herokuapp.com/stats
◀ RESPONSE
{
  "total_songs": 142,
  "total_frames": 9840,
  "status": "green",
  "vectors_count": 142,
  "indexed_vectors_count": 142
}
💡 Returns Qdrant collection statistics including total songs indexed, frame vectors stored, and index health status. Use this to monitor library growth after ingestion cycles.
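To quantify library growth, take a /stats snapshot before and after an ingestion cycle and diff the counters. A small sketch (the network fetch, e.g. via `httpx.get(f"{BASE}/stats").json()`, is left out so the helper stays offline-testable):

```python
def library_growth(before: dict, after: dict) -> dict:
    """Delta between two /stats snapshots taken around an ingestion
    cycle — how many songs and frame vectors were added."""
    return {k: after[k] - before[k] for k in ("total_songs", "total_frames")}
```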

Ingest Song — Add a YouTube video

STEP 4
▶ REQUEST
curl -X POST https://orpheus-api-0bc4904911a6.herokuapp.com/ingest/song \
  -H "Content-Type: application/json" \
  -d '{"youtube_id": "dQw4w9WgXcQ"}'
◀ RESPONSE
{
  "message": "Ingestion queued.",
  "youtube_id": "dQw4w9WgXcQ",
  "title": "Rick Astley - Never Gonna Give You Up",
  "artist": "Rick Astley"
}
💡 Pass an 11-character YouTube video ID. Orpheus will download the audio via yt-dlp, extract features through the HuggingFace Space extractor (VibeNet + librosa), and store the 30-dim song embedding + 32-dim frame embeddings in Qdrant. Skips if already in the library.
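Since the endpoint expects an 11-character YouTube video ID (not a full URL), a quick client-side check avoids a wasted round trip. A sketch using the standard ID alphabet (letters, digits, `-`, `_`):

```python
import re

# 11 characters drawn from the YouTube video-ID alphabet.
YOUTUBE_ID = re.compile(r"^[A-Za-z0-9_-]{11}$")

def is_valid_youtube_id(video_id: str) -> bool:
    """Cheap sanity check before calling POST /ingest/song."""
    return bool(YOUTUBE_ID.match(video_id))
```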

Python Client — Search example

STEP 5
▶ REQUEST
import httpx

BASE = "https://orpheus-api-0bc4904911a6.herokuapp.com"

# Semantic search — find chill acoustic tracks
resp = httpx.post(f"{BASE}/search", json={
    "energy": 0.3,
    "acousticness": 0.9,
    "valence": 0.7,
    "limit": 5
})

for song in resp.json()["playlist"]:
    print(f"{song['rank']}. {song['title']} "
          f"by {song['artist']} "
          f"(score: {song['score']:.2f})")
◀ RESPONSE
1. Essence by Wizkid (score: 0.91)
2. Calm Down by Rema (score: 0.88)
3. Love Nwantiti by CKay (score: 0.85)
💡 Use httpx (async-capable HTTP client) for Python integration. All mood fields are optional — omit any to default to neutral (0.5). The response includes full metadata: mood scores, instrument profile, spectral info, and streaming links.

{} Search Response Schema

PlaylistSong Fields
Field Type Description
rank integer 1-based position in result list
score float Cosine similarity (0–1). Higher = better match
title string Song title from metadata
artist string Primary artist name
mood object VibeNet scores: energy, valence, danceability, acousticness, instrumentalness, speechiness, liveness
tempo_bpm float Detected tempo in BPM
key string Musical key, e.g. 'C# minor'
links object YouTube and Spotify URLs
instrument_profile object Harmonic ratio, brightness, tonal strength, estimated instrument
cover_url string Album art URL from Spotify
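For typed Python clients, the schema above maps naturally onto a `TypedDict`. A sketch covering the documented fields (`total=False` because enrichment fields may be absent; the nested `mood` object is abbreviated to three of its seven scores):

```python
from typing import TypedDict

class Mood(TypedDict, total=False):
    # VibeNet scores, each a float in [0, 1]; abbreviated here.
    energy: float
    valence: float
    danceability: float

class PlaylistSong(TypedDict, total=False):
    rank: int                  # 1-based position in the result list
    score: float               # cosine similarity, 0-1
    title: str
    artist: str
    mood: Mood
    tempo_bpm: float
    key: str                   # e.g. "C# minor"
    links: dict                # YouTube and Spotify URLs
    instrument_profile: dict   # harmonic ratio, brightness, tonal strength
    cover_url: str             # album art URL from Spotify
```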

2+ YEARS EXP · 99.9% UPTIME · 5 (+3) ACTIVE DEPLOYS · K8s-NATIVE ORCHESTRATION