import json
import os
import subprocess
import threading
import fcntl
import mimetypes
from collections import deque
from dataclasses import dataclass
from pathlib import Path
from urllib.parse import urlencode, urljoin

import numpy as np
import requests
import sounddevice as sd

from jelly import server, client


@dataclass
class Song:
    id: str
    url: str
    name: str
    duration: float  # seconds
    album_name: str
    album_cover_path: str
    artist_name: str
    # Runtime state managed by GaplessPlayer (not part of the Jellyfin data):
    ffmpeg: subprocess.Popen | None = None
    preload_state: int = 0  # 0 = not started, 1 = loading, 2 = ready


def song_data_to_Song(data, client_data) -> Song:
    """Build a Song from a Jellyfin item: a FLAC stream URL plus a locally
    cached album cover."""
    item_id = data["Id"]
    path = f"/Audio/{item_id}/universal"
    params = {
        "UserId": client_data["UserId"],
        "Container": "flac",
        "AudioCodec": "flac",  # <-- IMPORTANT: request lossless output
        "api_key": client_data["AccessToken"],
    }
    query = urlencode(params)
    url = urljoin(client_data["address"], path) + "?" + query

    album_cover_url = urljoin(
        client_data["address"], f"/Items/{data['AlbumId']}/Images/Primary"
    )
    r = requests.get(album_cover_url)
    r.raise_for_status()
    content_type = r.headers.get("Content-Type")  # e.g. "image/jpeg"
    ext = mimetypes.guess_extension(content_type) if content_type else None
    if ext is None:
        ext = ".jpg"  # safe fallback for album art

    saved_path = Path("data", "images", data["AlbumId"] + ext).as_posix()
    with open(saved_path, "wb") as f:
        f.write(r.content)

    return Song(
        item_id,
        url,
        data["Name"],
        data["RunTimeTicks"] / 10_000_000,  # Jellyfin ticks are 100 ns
        data["Album"],
        saved_path,
        data["AlbumArtist"],
    )


# os.makedirs("logs", exist_ok=True)
os.makedirs("data", exist_ok=True)
os.makedirs("data/images", exist_ok=True)


class GaplessPlayer:
    def __init__(self, samplerate: int = 96000, channels: int = 2):
        self.samplerate = samplerate
        self.channels = channels
        self.closed = False
        self.playing = False
        self.position = 0.0
        self.song_list: list[Song] = []
        self.current_song_in_list = -1
        self.lock = threading.Lock()
        self.stream = sd.RawOutputStream(
            samplerate=self.samplerate,
            channels=self.channels,
            dtype="int16",
            callback=self._callback,
        )
        self.stream.start()
        self.oscilloscope_data_points = deque(maxlen=samplerate // 60)

    def get_current_song(self):
        if 0 <= self.current_song_in_list < len(self.song_list):
            return self.song_list[self.current_song_in_list]
        return None

    def _open_ffmpeg(self, song, seek=0):
        proc = subprocess.Popen(
            [
                "ffmpeg",
                # "-re",
                "-ss", str(seek),
                "-i", song.url,
                "-f", "s16le",
                "-ac", str(self.channels),
                "-ar", str(self.samplerate),
                "-loglevel", "error",
                "-",
            ],
            stdout=subprocess.PIPE,
            # Discard stderr: an undrained PIPE would eventually fill up and
            # stall ffmpeg.
            stderr=subprocess.DEVNULL,
        )
        # Make stdout non-blocking so the audio callback never waits on ffmpeg.
        fd = proc.stdout.fileno()
        flags = fcntl.fcntl(fd, fcntl.F_GETFL)
        fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)
        return proc

    def seek(self, pos):
        with self.lock:
            song = self.get_current_song()
            if song:
                pos = min(max(0, pos), song.duration)
                if song.ffmpeg:
                    song.ffmpeg.kill()
                    song.ffmpeg = None
                if self.playing:
                    song.ffmpeg = self._open_ffmpeg(song, pos)
                self.position = pos

    def close(self):
        self.closed = True
        self.stream.close()

    def add_to_queue(self, song: Song):
        song.ffmpeg = None
        song.preload_state = 0
        self.song_list.append(song)

    def play(self):
        with self.lock:
            if not self.playing:
                current_song = self.get_current_song()
                if current_song and not current_song.ffmpeg:
                    current_song.ffmpeg = self._open_ffmpeg(
                        current_song, self.position
                    )
                self.playing = True

    def pause(self):
        with self.lock:
            # current_song = self.get_current_song()
            # if current_song and current_song.ffmpeg:
            #     current_song.ffmpeg.kill()
            #     current_song.ffmpeg = None
            self.playing = False

    def _start_next(self):
        # Kill the old pipeline and advance to the (preloaded) next song.
        current_song = self.get_current_song()
        if current_song and current_song.ffmpeg:
            current_song.ffmpeg.kill()
            current_song.ffmpeg = None
        self.position = 0.0
        self.current_song_in_list += 1

    def get_next_song(self):
        if 0 <= self.current_song_in_list + 1 < len(self.song_list):
            return self.song_list[self.current_song_in_list + 1]
        return None

    def forward_song(self):
        current_song = self.get_current_song()
        if current_song and current_song.ffmpeg:
            current_song.ffmpeg.kill()
            current_song.ffmpeg = None
        if self.current_song_in_list < len(self.song_list):
            self.current_song_in_list += 1

    def load_song(self, song: Song):
        if song:
            song.ffmpeg = self._open_ffmpeg(song)
            song.preload_state = 2

    def preload_next_threaded(self):
        next_song = self.get_next_song()
        if not next_song or next_song.preload_state:
            return
        next_song.preload_state = 1
        threading.Thread(target=self.load_song, args=(next_song,)).start()

    def _callback(self, outdata, frames, t, status):
        with self.lock:
            needed = frames * self.channels * 2  # bytes; int16 = 2 bytes/sample
            data = b""
            if self.playing:
                current_song = self.get_current_song()
                if not current_song or current_song.ffmpeg is None:
                    next_song = self.get_next_song()
                    if next_song:
                        if next_song.preload_state == 2:
                            self._start_next()
                        elif next_song.preload_state == 0:
                            self.preload_next_threaded()
                elif current_song:
                    try:
                        data = current_song.ffmpeg.stdout.read(needed) or b""
                    except BlockingIOError:
                        pass
                    self.position += len(data) / (
                        self.samplerate * self.channels * 2
                    )
                    if self.position >= current_song.duration - 10:
                        # Within 10 s of the end: start preloading the next song.
                        self.preload_next_threaded()
                    else:
                        next_song = self.get_next_song()
                        if next_song and next_song.ffmpeg:
                            if next_song.ffmpeg.poll() is None:
                                next_song.ffmpeg.kill()
                            next_song.ffmpeg = None
                            next_song.preload_state = 0
                    if current_song.ffmpeg.poll() is not None and len(data) < needed:
                        if round(self.position, 2) >= current_song.duration - 0.1:
                            # Song finished: switch over and top up the buffer
                            # from the next song for a gapless transition.
                            self._start_next()
                            current_song = self.get_current_song()
                            if (
                                current_song
                                and current_song.ffmpeg is not None
                                and current_song.ffmpeg.poll() is None
                            ):
                                try:
                                    new_data = (
                                        current_song.ffmpeg.stdout.read(
                                            needed - len(data)
                                        )
                                        or b""
                                    )
                                except BlockingIOError:
                                    new_data = b""
                                self.position += len(new_data) / (
                                    self.samplerate * self.channels * 2
                                )
                                data += new_data
                        else:
                            # ffmpeg died mid-song: restart it at the current
                            # position.
                            current_song.ffmpeg = self._open_ffmpeg(
                                current_song, self.position
                            )
            samples = np.frombuffer(data, dtype=np.int16)
            left = samples[0::2]
            right = samples[1::2]
            norm = 32768.0  # int16 full scale
            x = left / norm
            y = right / norm
            points = list(zip(x, y))
            # step = max(1, len(points) // 1000)
            # points = points[::step]
            self.oscilloscope_data_points.extend(points)
            outdata[: len(data)] = data
            outdata[len(data):] = b"\x00" * (needed - len(data))

    def save_state(self, path):
        with open(path, "w") as f:
            data = {
                "queue": [song.id for song in self.song_list],
                "current_song": self.current_song_in_list,
                "position": self.position,
            }
            json.dump(data, f)

    def load_state(self, path):
        try:
            with open(path, "r") as f:
                data = json.load(f)
            self.song_list = []
            for song_id in data["queue"]:
                song = song_data_to_Song(client.jellyfin.get_item(song_id), server)
                self.song_list.append(song)
            self.current_song_in_list = data["current_song"]
            self.seek(data["position"])
        except (OSError, json.JSONDecodeError, KeyError):
            # No saved state (or a corrupt one): start fresh.
            return


# while True:
#     duration = player.playback_info_to_duration(player.playback_info)
#     print(
#         "pos:",
#         str(round((player.position * 100) / (duration or 1.0))) + "%",
#         player.position, "/", duration,
#     )
#     time.sleep(1)
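

# --- Usage sketch (illustrative) --------------------------------------------
# A minimal, standalone sketch of the stream-URL construction performed in
# song_data_to_Song above, runnable without a Jellyfin server. The address,
# item id, user id, and token passed in are placeholders, not real values.
def build_stream_url(address: str, item_id: str, user_id: str, token: str) -> str:
    """Assemble a Jellyfin /Audio/{id}/universal URL requesting FLAC output."""
    params = {
        "UserId": user_id,
        "Container": "flac",
        "AudioCodec": "flac",
        "api_key": token,
    }
    return urljoin(address, f"/Audio/{item_id}/universal") + "?" + urlencode(params)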