Description
I primarily use Writing Tools as a fast and robust spellchecker across different apps on macOS. I recently took a break from using the tool (which previously worked) while I set up a dedicated home server for running LLMs locally on my network. The server runs its models with Ollama, started with the `ollama serve` command, and this Ollama instance is confirmed to work with other programs.

However, after reinstalling Writing Tools on my machine and asking it to proofread selected text using my Ollama instance, I encounter an error. Whether I point it at a local instance or at my home server, Ollama does not receive the text it is supposed to proofread. Instead, Writing Tools deletes the selected text and replaces it with whatever response the LLM gives when handed only the preset's prompt, without any other information such as my selected text. I usually get outputs like "this is exactly what i was looking for" or "REQUEST CANNOT BE PROCESSED WITH PROVIDED TEXT".
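For reference, here is a minimal sketch of what I understand the request body sent to Ollama's standard `/api/generate` endpoint should look like, with the selected text appended to the preset's instruction. The model name and instruction wording are placeholders, not the actual values Writing Tools uses; the bug behaves as if only the instruction (and not the selected text) ends up in `prompt`:

```python
import json

# Default Ollama endpoint (placeholder host; mine is a server on my LAN)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_proofread_request(preset_instruction: str,
                            selected_text: str,
                            model: str = "llama3") -> str:
    # The prompt should contain BOTH the preset's instruction and the
    # user's selected text. The behavior described above looks as if
    # only the instruction is being sent.
    payload = {
        "model": model,
        "prompt": f"{preset_instruction}\n\n{selected_text}",
        "stream": False,
    }
    return json.dumps(payload)

body = build_proofread_request(
    "Proofread the following text and correct any errors:",
    "Teh quick brown fox jumsp over the lazy dog.",
)
print(body)
```

Posting a body like this to my Ollama instance directly (e.g. with `curl`) returns a proper proofread, which is why I believe the server itself is fine and the selected text is being dropped before the request is made.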