#llm-ops

- `<link rel="alternate" type="text/markdown" title="LLM-friendly version" href="/llms.txt">` is an emerging approach for pointing to llms.txt. It works. I asked Codex to read the Cloudflare vitest page. It read the file (truncating the middle), found the `<link rel="alternate" type="text/markdown" href="https://developers.cloudflare.com/workers/testing/vitest-integration/write-your-first-test/index.md"/>` link in it, reasoned "Considering fetching markdown instructions", and fetched the Markdown page. #html #llm-ops #markdown
- `uvx gitingest https://github.com/owner/repo` fetches the code in a Git repo in a form suitable for passing to an LLM. #chatgpt #github #llm-ops
- `llm --save ffmpeg --model gpt-4.1-mini --extract --system 'Write an ffmpeg command'` saves a reusable template, which I can use like this: `llm -t ffmpeg 'Crossfade a.mkv (1:00-1:30) with b.mkv (2:10-2:20), 3s duration'`
- Lessons from running coding agents: vague asks (“baseUrl vs baseURL”, “add GA-4 exam module”) → churn & rewrites; broken environments (vitest not installed, .dev.vars absent, sub-modules not cloned, network blocks) → document npm install steps, env-var templates, and submodule notes; and run lint && test (plus static-analysis / self-critique) before every response. #llm-ops #optimization #learning
- `cat file.py | llm -t fabric:explain_code` #future #llm-ops #markdown #prompt-engineering
- `GEMINI_API_KEY=... uvx llm-min -i $DIR` #llm-ops #markdown #ai-coding
- `uvx streamdown --exec 'llm chat'` lets you chat with an LLM using Markdown formatting. It's still a little rough at the edges. #llm-ops #markdown
- cmdg. #code-agents #github #llm-ops #markdown #prompt-engineering
- An `ai!` comment triggers changes and `ai?` asks questions.
- A tmux-based LLM tool for the command line. It screen-grabs from tmux, which is powerful.
- make sucks but is hard to beat. `just` comes closest.
- yjs is a good start, but automerge (Rust, WASM) is faster and may be better.
- `WEBUI_SECRET_KEY=... uvx --python 3.11 open-webui serve` runs Open WebUI, which is actively developed. Text generation Web UI is less so. KoboldAI, LMQL, LM Studio, GPT4All, etc. are far behind. #llm-ops #web-dev
- The openai library works across multiple providers; see the sketch after these notes. #ai-coding #llm-ops
- Use `us.meta.llama3-2-11b-instruct-v1:0` if the model is in a US region. #llm-ops
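A sketch for the multi-provider note above: the `openai` client can point at any OpenAI-compatible endpoint by swapping `base_url` and `api_key`. OpenRouter and the model IDs here are illustrative assumptions, not from the note.

```python
# Hypothetical sketch: one client library, two providers.
import os
from openai import OpenAI

providers = [
    # (client, model) pairs; model IDs are assumptions, check each provider's list
    (OpenAI(), "gpt-4.1-mini"),  # default base_url: https://api.openai.com/v1
    (
        OpenAI(
            base_url="https://openrouter.ai/api/v1",
            api_key=os.environ["OPENROUTER_API_KEY"],
        ),
        "meta-llama/llama-3.1-8b-instruct",
    ),
]

for client, model in providers:
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Say hi in one word."}],
    )
    print(model, "→", reply.choices[0].message.content)
```

And for the Bedrock note, a minimal boto3 sketch (region and prompt are assumptions); the point is the `us.` cross-region inference profile prefix on the model ID.

```python
# Hypothetical sketch: invoke a US-region Bedrock inference profile.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
response = bedrock.converse(
    modelId="us.meta.llama3-2-11b-instruct-v1:0",  # note the "us." prefix
    messages=[{"role": "user", "content": [{"text": "Say hello."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```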
- Today, 38 repos on GitHub support it. #llm-ops
- /llms.txt files as a way to share LLM prompts. #llm-ops #markdown
- To add a console.llm() function, a browser extension is the best way, because some pages have a Content-Security-Policy that blocks eval, form submission, fetch from other domains, and script execution. #html #llm-ops
- `<reflection>...</reflection>` tags. #future #llm-ops
- `devices:` in Docker Compose lets you specify NVIDIA GPU devices; see the compose sketch below. #gpu #llm-ops #optimization
- `ffmpeg -i filename [YOUR OPTIONS]`. #llm-ops #prompt-engineering
- `pip install llmfoundry` #chatgpt #gpu #llm-ops
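A compose sketch for the `devices:` note above; the image and GPU count are assumptions.

```yaml
# docker-compose.yml: reserve an NVIDIA GPU for the service
services:
  inference:
    image: ollama/ollama   # assumption: any GPU-capable image works here
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1            # or "all"
              capabilities: [gpu]
```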
- "Gr brx vshdn Fdhvdu flskhu?" (a shift-3 Caesar cipher for "Do you speak Caesar cipher?") is a quick way to assess LLM capability. #llm-ops
- The logit_bias trick limits choices in the output. See get_choice(). #llm-ops #markdown
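To verify the cipher yourself, a shift-3 decode:

```python
# Decode a Caesar cipher by shifting letters back by 3.
def caesar(text: str, shift: int = -3) -> str:
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

print(caesar("Gr brx vshdn Fdhvdu flskhu?"))  # Do you speak Caesar cipher?
```

And a sketch of the logit_bias trick; this `get_choice` is a hypothetical stand-in for the referenced implementation, and the model choice is an assumption.

```python
# Bias a few single-token choices to +100 and cap output at one token,
# so the model must answer with one of the allowed choices.
import tiktoken
from openai import OpenAI

client = OpenAI()
encoding = tiktoken.encoding_for_model("gpt-4o-mini")

def get_choice(question: str, choices: list[str]) -> str:
    bias = {}
    for choice in choices:
        tokens = encoding.encode(choice)
        assert len(tokens) == 1, f"{choice!r} must encode to a single token"
        bias[str(tokens[0])] = 100  # +100 is the maximum allowed bias
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
        logit_bias=bias,
        max_tokens=1,  # exactly one token: one of the biased choices
    )
    return response.choices[0].message.content

print(get_choice("Is Python dynamically typed? Answer Yes or No.", ["Yes", "No"]))
```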