uvx lida ui --port 8080 --docs works. I had to export TCL_LIBRARY=C:/Users/Anand/AppData/Roaming/uv/python/cpython-3.13.0-windows-x86_64-none/tcl/tcl8.6 to point it to my Tcl installation for charts to work. I also chose to export OPENAI_BASE_URL=https://llmfoundry.straive.com/openai/v1 and replaced gpt-3.5-turbo-0301 (the default model) with gpt-4o-mini in lida/web/ui/component*.
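Putting that setup together in one place (same paths and base URL as above; the model swap is a manual edit in the lida/web/ui/component* files, not an environment variable):

```shell
# Point Tkinter at the Tcl installation bundled with uv's Python (needed for charts)
export TCL_LIBRARY=C:/Users/Anand/AppData/Roaming/uv/python/cpython-3.13.0-windows-x86_64-none/tcl/tcl8.6

# Route OpenAI API calls through the LLM Foundry proxy
export OPENAI_BASE_URL=https://llmfoundry.straive.com/openai/v1

# Run the LIDA UI with docs enabled
uvx lida ui --port 8080 --docs
```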
devices: in Docker Compose lets you specify NVIDIA GPU devices. #chatgpt #gpu
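For reference, a minimal compose file granting a service one NVIDIA GPU via a device reservation (service name and image are placeholders):

```yaml
services:
  llm:
    image: ollama/ollama   # placeholder image
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1               # or "all" for every GPU on the host
              capabilities: [gpu]
```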
text-embedding-3-large produces embeddings that can be truncated: the values are in descending order of importance, so keeping the first n dimensions is a good approximation of the full vector. Also, gpt-3.5-turbo-0125 is 50% cheaper. #embeddings
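A minimal sketch of that truncation, assuming you already have a full-size embedding vector: keep the first n values, then re-normalize to unit length so cosine similarity still behaves on the shortened vector.

```python
import numpy as np

def truncate_embedding(vec, n):
    """Keep the first n dimensions and re-normalize to unit length,
    so cosine similarity still works on the shortened vector."""
    v = np.asarray(vec, dtype=float)[:n]
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

# Stand-in for a real embedding: text-embedding-3-large is 3072-dimensional
full = np.random.default_rng(0).normal(size=3072)
short = truncate_embedding(full, 256)
```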