`uvx gitingest https://github.com/owner/repo` fetches the code in a Git repo in a form suitable for passing to an LLM. #chatgpt #github #llm-ops

`llm --save ffmpeg --model gpt-4.1-mini --extract --system 'Write an ffmpeg command'` saves a template, which I can use like this: `llm -t ffmpeg 'Crossfade a.mkv (1:00-1:30) with b.mkv (2:10-2:20), 3s duration'`

Coding-agent pitfalls and fixes:

- Inconsistent naming like `baseUrl` vs `baseURL` causes bugs.
- Vague prompts ("add GA-4 exam module") → churn & rewrites.
- Broken environments stall the agent: `vitest` not installed, `.dev.vars` absent, sub-modules not cloned, network blocks.
- Document `npm install`, env-var templates, and submodule notes.
- Run `lint && test` (plus static-analysis / self-critique) before every response, as in the sketch below. #llm-ops

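The last point is easy to automate. A minimal sketch of such a pre-response gate, assuming an npm project (the script names are illustrative):

```python
import subprocess
import sys

def gate() -> bool:
    """Run lint and tests; only let the agent respond if both pass."""
    for cmd in (["npm", "run", "lint"], ["npm", "test"]):
        if subprocess.run(cmd).returncode != 0:
            return False
    return True

if __name__ == "__main__":
    sys.exit(0 if gate() else 1)
```
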
`cat file.py | llm -t fabric:explain_code` explains code using the Fabric `explain_code` pattern. Ref. #future #llm-ops

`GEMINI_API_KEY=... uvx llm-min -i $DIR` compresses the documentation in `$DIR` into a compact, LLM-friendly summary. #llm-ops

`uvx streamdown --exec 'llm chat'` lets you chat with an LLM using Markdown formatting. It's still a little rough around the edges. #llm-ops #markdown

`cmdg`. #code-agents

An `ai!` comment triggers changes and `ai?` asks questions.

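A sketch of the convention, assuming Aider-style comment triggers in watched files (the function and comments are illustrative):

```python
def fetch_user(user_id):  # add caching with a 5-minute TTL, ai!
    ...

# Why does this endpoint return 404 for valid IDs? ai?
```
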
A tmux-based LLM tool for the command line. It screen-grabs from tmux, which is powerful.

`make` sucks but is hard to beat; `just` comes closest.

`yjs` is a good start, but `automerge` (Rust, WASM) is faster and may be better. #llm-ops

`WEBUI_SECRET_KEY=... uvx --python 3.11 open-webui serve` runs Open WebUI. #llm-ops #web-dev

… Text generation Web UI is less so. KoboldAI, LMQL, LM Studio, GPT4All, etc. are far behind. #ai-coding

You can use the `openai` library across multiple providers. #ai-coding

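Most providers expose OpenAI-compatible endpoints, so switching is mostly a `base_url` change. A minimal sketch (the endpoint and model name are illustrative):

```python
from openai import OpenAI

# Point the standard OpenAI client at any OpenAI-compatible provider.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # e.g. OpenRouter; Groq, Ollama, Gemini work similarly
    api_key="...",
)
reply = client.chat.completions.create(
    model="meta-llama/llama-3.1-8b-instruct",
    messages=[{"role": "user", "content": "Say hi"}],
)
print(reply.choices[0].message.content)
```
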
On Amazon Bedrock, use the region-prefixed model ID `us.meta.llama3-2-11b-instruct-v1:0` if the model is in a US region. #llm-ops

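A minimal sketch using boto3's Converse API, assuming this refers to Bedrock's cross-region inference profiles (the region and prompt are illustrative):

```python
import boto3

# Cross-region inference profiles prefix the model ID with a geography
# ("us.", "eu.", ...); the bare model ID may fail outside its home region.
client = boto3.client("bedrock-runtime", region_name="us-east-1")
response = client.converse(
    modelId="us.meta.llama3-2-11b-instruct-v1:0",
    messages=[{"role": "user", "content": [{"text": "Say hi"}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```
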
Today, 38 repos on GitHub support it. #llm-ops

`/llms.txt` files are a way to share LLM prompts. #llm-ops

To add a `console.llm()` function, a browser extension is the best way, because some pages have Content-Security-Policy headers that block eval, form submission, fetch from other domains, and script execution. #html

`<reflection>...</reflection>` tags. #future

`devices:` in Docker Compose lets you specify NVIDIA GPU devices. #llm-ops

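A minimal sketch of the Compose syntax (the image name is illustrative):

```yaml
services:
  llm:
    image: ollama/ollama  # illustrative
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1            # or: device_ids: ["0"]
              capabilities: [gpu]
```
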
`ffmpeg -i filename [YOUR OPTIONS]`.

`pip install llmfoundry`

`Gr brx vshdn Fdhvdu flskhu?` (a Caesar-shifted "Do you speak Caesar cipher?") is a quick way to assess LLM capability. Ref. #llm-ops

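The decode, for reference: shift each letter back three places.

```python
def caesar(text: str, shift: int = -3) -> str:
    """Shift alphabetic characters, leaving everything else intact."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

print(caesar("Gr brx vshdn Fdhvdu flskhu?"))  # Do you speak Caesar cipher?
```
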
The `logit_bias` trick limits choices in the output. See `get_choice()`. #llm-ops

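A sketch of the trick; this `get_choice()` is my reconstruction assuming single-token choices, not necessarily the implementation the note references:

```python
import tiktoken
from openai import OpenAI

def get_choice(question: str, choices: list[str]) -> str:
    """Bias the model so its one output token must be one of the choices."""
    enc = tiktoken.encoding_for_model("gpt-4o-mini")
    bias = {}
    for choice in choices:
        tokens = enc.encode(choice)
        assert len(tokens) == 1, f"{choice!r} must be a single token"
        bias[tokens[0]] = 100  # maximum upweight for allowed tokens
    reply = OpenAI().chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
        logit_bias=bias,
        max_tokens=1,  # exactly one token: the choice itself
    )
    return reply.choices[0].message.content

print(get_choice("Is the sky blue? Answer Yes or No.", ["Yes", "No"]))
```
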