Home | Tabby This is actually pretty cool and useful. I just tried it locally on my Mac, of course, and it seems genuinely useful. What would be interesting for me would be to train it on my own code and many projects 😅
Most of the models that can run locally have such a small training set that they aren't worth it. They're more like the Markov chains from the subreddit simulator days.
There is one called Orca that seems promising and will be released as OSS soon. It's benchmarking at numbers comparable to OpenAI's GPT-3.5.
But remember, an LLM is only a very good autocomplete.
@xuu@txt.sour.is Yeah, that's the problem I have really. Unless I can easily train the LLM on my own dataset(s), so I can autocomplete things I've done before and repeat the same/similar patterns, this whole thing is just not worth it for me, because it's basically just "dumb".
@prologic@twtxt.net The hackathon project I did recently used OpenAI and embedded the retrieved info into the prompt. So basically I would search for the top 3 most relevant results, feed those into the prompt, and the AI would summarize them to answer the question.
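That retrieve-then-summarize flow can be sketched roughly like this. This is a toy illustration, not the actual hackathon code: the bag-of-words "embedding" and cosine ranking stand in for a real embedding API (e.g. OpenAI's), and the document strings are made up.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" for illustration only;
    # a real system would call an embedding model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_prompt(question, documents, k=3):
    # Rank documents by similarity to the question, keep the top k,
    # and stuff them into the prompt as context for the LLM.
    q = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    context = "\n".join(f"- {d}" for d in ranked[:k])
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\n"
    )

docs = [
    "Embeddings map text to vectors for similarity search.",
    "Tabby is a self-hosted AI coding assistant.",
    "Markov chains generate text from transition probabilities.",
    "Bananas are rich in potassium.",
]
print(build_prompt("How do embeddings help search?", docs))
```

The prompt that comes out would then be sent to the chat completion API; the model only ever sees the top-k snippets, which keeps the prompt small and the answer grounded in your own data.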
I see