@2e0byo 2e0byo commented Dec 28, 2025

This PR adds an http proxy for playback suitable for use over slow / unreliable connections. Unlike my previous proxy from years ago, this is a fully-featured (if minimal) http proxy sitting between gstreamer and tidal. It's also unfinished, but I'm opening it now for criticism / suggestions.

It is supposed to work as follows:

  • The proxy runs in a thread, running a python asyncio loop (for easier concurrency)
  • gstreamer is given a local (proxy) url
  • if the proxy has data in its cache it serves the cached data. Otherwise it streams data from the remote into the local cache and out to gstreamer.
    • if the stream finishes successfully, the cached data is finalised and will be used next time. If the stream is dropped (e.g. by the user skipping tracks before playback is finished), the cached data is dropped.
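The cache-or-stream flow above can be sketched roughly as follows. This is a minimal, hypothetical illustration (the names `StreamCache`, `serve`, and the in-memory storage are invented for the sketch, not the PR's actual API): serve cached bytes if we have them, otherwise tee the remote stream into the cache, finalising on success and dropping the partial data if the consumer goes away.

```python
import asyncio


class StreamCache:
    """Toy in-memory stand-in for the on-disk cache (hypothetical API)."""

    def __init__(self):
        self._final: dict[str, bytes] = {}
        self._partial: dict[str, bytearray] = {}

    def get(self, key):
        return self._final.get(key)

    def append(self, key, chunk):
        self._partial.setdefault(key, bytearray()).extend(chunk)

    def finalise(self, key):
        self._final[key] = bytes(self._partial.pop(key, b""))

    def drop(self, key):
        self._partial.pop(key, None)


async def serve(key, cache, fetch):
    """Yield chunks for the player: cached if complete, else tee remote into cache."""
    cached = cache.get(key)
    if cached is not None:
        yield cached
        return
    try:
        async for chunk in fetch(key):
            cache.append(key, chunk)
            yield chunk
    except (asyncio.CancelledError, GeneratorExit):
        # e.g. the user skipped the track: partial data is useless, drop it.
        cache.drop(key)
        raise
    else:
        # Stream completed: promote the partial record to a final one.
        cache.finalise(key)
```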

Note that caching is only applied when MPEG-DASH is not used, since caching an adaptive stream makes no sense (every chunk in MPEG-DASH is like a little file with one of multiple encodings, adapting to network conditions). We could make this work by just retrying until we get the high-quality section, but that rather defeats the point of MPEG-DASH. Naturally, this means the cache won't work with formats requiring MPEG-DASH. I don't think that's a problem, as you can't listen to ultrasonic encoded audio over 4G anyway ;).

NOTE this PR is based off the uv PR, so most of the commits currently are from the uv / ruff port. If uv is rejected I'll rebase and get it working with whatever main is, although I hope it isn't, as I gave up trying to get poetry to play nicely with gstreamer on nix :D

The work is:

  • implement http cache
  • support https if required
  • plug into mopidy-tidal's url transformer
  • add a config option to prefer the url over the stream (right now stream is preferred if using pkce)
  • add cache eviction
  • add an offline download api (this will never be supported in mopidy as AFAIK it has no such context, but we can ship a tiny cli so you can pull your playlists before going on a trip).

For the fun of it this is a zero-dep pure python impl.

2e0byo commented Dec 28, 2025

Note that I've also bumped the python version to 3.12 because I wanted inline generics (def foo[T: baseClass](data: T)). If we need to support older python I'll backport later and test against them.

2e0byo commented Dec 29, 2025

well tentative impl... I'll actually test it when I get back. Bet it doesn't work :D.

I'll stop force-pushing all the time when this goes out of draft.

@tehkillerbee

Sorry, haven't yet had time to look at the PR.

I was wondering, are there any of these additions that would make better sense to add directly to tidalapi?

2e0byo added 2 commits January 3, 2026 15:50
I don't like it, but it's too late to SHOUT at the clouds.
2e0byo commented Jan 3, 2026

No worries about the PR at all, it's still been in flight anyway. Added cache eviction.

I don't think any of this belongs in tidalapi so far. Despite the number of commits here I've modeled this as a pure HTTP(S) proxy sitting between tidal and gstreamer. As such it will actually work for any streaming service which uses HTTP. The logic was that there's really no reason tidalapi should handle caching or storing or streaming: these are all things applications will have different ideas about and will need to solve differently.

The only thing I do think we need in tidalapi is a way to opt out of MPEG-DASH when using PKCE (if even possible: it might just be that for that you need the new encrypted download endpoints).

It might make sense to spin off the whole of gstreamer_proxy as a separate library: a tiny streaming http proxy built for music players. But right now I think that would lead to more frustration until it does everything we need here.

The remaining bit of architecture is to cache the ID -> path conversion somewhere, so playback can work via the cache without needing tidalapi. That will allow full offline mode for downloaded content. The obvious solution is just to store an entry in the cache DB when we kick off a download, and then use that to build the cache URL where we can without needing the internet. But I want to think this through, because "just" writing an HTTP proxy was really motivated by getting this up and running as easily as possible.

BTW we're now at the point where you can grab the progress bar and drag it around both when proxying and when serving from the cache and it will "just work". At least I can't make it crash any more. Current functionality:

  • only works with oauth
  • serves local results from the cache, proxies results we don't have
  • supports seeking (range requests) in both local and proxied results
  • saves only complete records to disk
  • updates access time whenever a record is served (i.e. a track is played)
  • supports LRU cache eviction (not currently enabled: needs to be an option)
  • cleans up partial insertions on startup in case of a previous crash

The odd thing is that a "real" db would make some of this much easier (we could just use transactions for rollback...), but sqlite ships with Python and is available everywhere, so we'll use that. (I did consider not using a DB, but it ended up much easier this way.)
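The access-time tracking and LRU eviction described above fit naturally in sqlite. A rough sketch, assuming a hypothetical `records` table (the schema, column names, and helpers here are invented for illustration):

```python
import sqlite3
import time


def make_db() -> sqlite3.Connection:
    db = sqlite3.connect(":memory:")  # the real cache would be on disk
    db.execute(
        "CREATE TABLE records "
        "(uri TEXT PRIMARY KEY, path TEXT, size INTEGER, atime REAL)"
    )
    return db


def touch(db: sqlite3.Connection, uri: str) -> None:
    """Bump the access time whenever a record is served (i.e. played)."""
    db.execute("UPDATE records SET atime = ? WHERE uri = ?", (time.time(), uri))


def evict_lru(db: sqlite3.Connection, max_bytes: int) -> list[str]:
    """Delete least-recently-used rows until the total fits in max_bytes;
    return the file paths the caller should unlink."""
    evicted = []
    total = db.execute("SELECT COALESCE(SUM(size), 0) FROM records").fetchone()[0]
    rows = db.execute("SELECT uri, path, size FROM records ORDER BY atime").fetchall()
    for uri, path, size in rows:
        if total <= max_bytes:
            break
        db.execute("DELETE FROM records WHERE uri = ?", (uri,))
        total -= size
        evicted.append(path)
    return evicted
```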

This is a strict superset of the other 2 PRs, so it might make more sense to look at those first when you do get round to it.

I'll try to get the uv tidalapi PR in in the next few days. Happy new year!

2e0byo commented Jan 4, 2026

With this push everything is implemented except the pkce + offline download. I have a POC of the offline download and it's going to be sufficiently complicated I'll land it in a separate PR, either into this branch or upstream. For PKCE I'll wait for your input, but right now we could just document this as only working with oauth and look into it later.

Changes:

  • it seems we can use a thread, not a process, to run the cache. An earlier lockup which I was convinced was the GIL appears to be something inside gstreamer itself. On nixos with this flake I can't play urls until I've first played a local file (? but my problem, not yours).
  • the cache DB now caches uri -> path conversion. This allows the proxy to continue caching at the HTTP level, whilst removing the need for the network when looking up a track we have cached. This in turn permits fully offline music playing for cached music. I'd like to make mopidy-tidal more offline aware full stop (currently it's a bit of a lottery if you'll hit a path requiring login, although most paths will just error and skip).

Naturally you might ask if the cache shouldn't just work with uris all along. I don't think that's a bad way to build it, but I've gone for the other way now :D. I did consider appending a custom http query param like _mopidy_tidal_uri=tidal:track:0:0:0 to the generated uri and having the cache strip it before hitting upstream, but decided this is cleaner.
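The uri -> path lookup above amounts to something like the following sketch (table name, port, and URL layout are all hypothetical, for illustration only): check the cache DB first and only fall back to tidalapi, and hence the network, on a miss.

```python
import sqlite3


def resolve(db: sqlite3.Connection, uri: str, fetch_remote_url) -> str:
    """Map a mopidy URI (e.g. "tidal:track:...") to something the player can
    open. A cache hit builds a local proxy URL with no network access;
    a miss falls back to fetch_remote_url, which needs login + network."""
    row = db.execute("SELECT path FROM tracks WHERE uri = ?", (uri,)).fetchone()
    if row is not None:
        return f"http://127.0.0.1:8900/cache/{row[0]}"  # port/layout hypothetical
    return fetch_remote_url(uri)
```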

2e0byo commented Jan 4, 2026

Drat, we do need to add a new config option for the cache max size and pass it through.
