#best-practices #code-agents #future #prompt-engineering #prediction
`protected_group`; show SHAP top features per group; propose three policy levers to reduce disparity. #automation
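A minimal sketch of the "SHAP top features per group" step, assuming a fitted tree model `model` and a feature DataFrame `X` with a `protected_group` column (all names are illustrative):

```python
import pandas as pd
import shap  # pip install shap

features = X.drop(columns=["protected_group"])  # model inputs only
explainer = shap.TreeExplainer(model)           # assumes a tree-based model
shap_values = explainer.shap_values(features)   # 2-D array for regression / binary targets

# Mean |SHAP| per feature within each protected group -> top 3 drivers per group
abs_shap = pd.DataFrame(abs(shap_values), columns=features.columns, index=X.index)
top_features = (
    abs_shap.groupby(X["protected_group"])
    .mean()
    .apply(lambda row: row.nlargest(3).index.tolist(), axis=1)
)
print(top_features)
```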
#code-agents #optimization #prompt-engineering #ai-coding #best-practices #code-agents #dev #github #prompt-engineering
`/init` in Claude Code / Codex / ... or have an LLM generate a developer guide. `-c model_reasoning_effort=high` in Codex. #ai-coding
#code-agents #dev #github #llm-ops #prompt-engineering
`record` fish script with an `echo` reminding me to write the meeting goal, my role, practice kind candor, and measure effectiveness.) #automation
#best-practices #future #prompt-engineering #ai-coding #automation #code-agents #optimization #prompt-engineering #best-practices #optimization #prompt-engineering #ai-coding #automation #best-practices #code-agents #future #optimization #prompt-engineering #best-practices #future #optimization #prompt-engineering #future #llm-ops #prompt-engineering #automation #dev #gpu #markdown #optimization #prompt-engineering #prediction #code-agents #prompt-engineering #automation #code-agents #future #optimization #prompt-engineering #best-practices #future #optimization #prompt-engineering #automation #future #optimization #prompt-engineering #ai-coding #automation #code-agents #dev #future #prompt-engineering #best-practices #optimization #prompt-engineering
`spec.md` to AI coding agents rather than directly typing in prompts. This lets you meta-prompt and (collaboratively) iterate on the `spec.md`, version the prompts as specs, and generate specs as documentation. Tanika Gupta #ai-coding
#code-agents #future #github #markdown #prompt-engineering
`apply_patch()` function. Implement this in JavaScript. #github
#markdown #prompt-engineering #medium #ai-coding #code-agents #future #prompt-engineering #prediction #markdown #prompt-engineering #ai-coding #best-practices #code-agents #dev #future #github #markdown #prompt-engineering #ai-coding #dev #future #github #llm-ops #prompt-engineering #code-agents #dev #markdown #prompt-engineering #medium #ai-coding #automation #code-agents #dev #github #markdown #prompt-engineering #ai-coding #best-practices #code-agents #optimization #prompt-engineering #ai-coding #code-agents #llm-ops #prompt-engineering #voice-cloning #llm-ops #prompt-engineering
`prompts.md`, told Codex "prompts.md has a prompt under the "# Improve schema" section starting line 294. This is a prompt that will be passed to Claude Code to implement. Ask me questions as required and improve the prompt so that the results will be in line with my expectations, one-shot." After a few discussions, it generated this remarkable prompt. This prompt was easy for me to review AND easy for Claude Code to understand because of the lack of inconsistencies. #ai-coding
#code-agents #prompt-engineering #dev #future #prompt-engineering #ai-coding #automation #best-practices #code-agents #dev #prompt-engineering #ai-coding #best-practices #code-agents #prompt-engineering #code-agents #future #prompt-engineering
"type": "custom"
that lets it write code as an argument to a tool call. Great for code / SQL generation. Even more powerfully, you can generate output following specific grammars, e.g. STL files, PostgreSQL dialect, Mermaid/PlantUML diagrams, OpenAPI specs, Vega-Lite JSONs, Cron expressions, GraphQL SDLs, Dockerfiles, Terraform HCLs, or any DSL! #ai-coding
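A minimal sketch of such a freeform custom tool, assuming the OpenAI Responses API (model name, tool name, and output field names are illustrative and may differ across SDK versions):

```python
from openai import OpenAI  # assumes the official openai Python SDK

client = OpenAI()

# A "custom" tool receives freeform text (here: raw SQL) instead of JSON arguments.
response = client.responses.create(
    model="gpt-5",
    input="Total 2024 revenue by region from the orders table.",
    tools=[{
        "type": "custom",
        "name": "run_sql",
        "description": "Runs the PostgreSQL query passed verbatim as the tool input.",
    }],
)

# The tool call carries the generated SQL as plain text.
for item in response.output:
    if item.type == "custom_tool_call":
        print(item.input)
```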
#automation #code-agents #github #markdown #prompt-engineering #chatgpt #prompt-engineering #prompt-engineering
- `node --experimental-sea-config sea-config.json` builds standalone binaries.
- `node:` prefix for built-in imports: `import { createServer } from 'node:http';`
- `node --watch file.js` auto-reloads when `file.js` or dependencies change.
- `node --env-file=.env` loads `.env` as environment variables.
- `node:test` is a full-featured test framework with `--watch` and coverage.
#best-practices
#chatgpt #code-agents #future #github #markdown #optimization #prompt-engineering #automation #future #markdown #prompt-engineering #automation #prompt-engineering #speech-to-text #voice-cloning #ai-coding
`python` asks it to use `uv`. #future
#prompt-engineering #prediction #ai-coding #code-agents #document-conversion #prompt-engineering #write #prompt-engineering #ask
ask clarifying questions when needed
#code-agents #future #prompt-engineering #search #try #optimization #prompt-engineering #ai-coding #ai-coding #automation #code-agents #dev #future #markdown #optimization #prompt-engineering #ai-coding #automation #code-agents #dev #github #markdown #prompt-engineering
`claude --debug` shows what Claude Code is doing behind the scenes -- and is a good way to understand hidden / undocumented features. `.claude/commands/`. `CLAUDE.md`, `AGENTS.md` and `GEMINI.md` into a `CONVENTIONS.md`.
#best-practices #code-agents #prompt-engineering
"web tool to access up-to-date information…" <immersive> id="…" type="text/markdown" "canmore tool creates and updates textdocs that are shown in a "canvas"…" search_query: … "thought" #ai-coding
#automation #code-agents #github #prompt-engineering #ai-coding #automation #chatgpt #code-agents #llm-ops #prompt-engineering #code-agents #prompt-engineering #automation #best-practices #document-conversion #future #optimization #prompt-engineering #write #ai-coding #automation #optimization #prompt-engineering #best-practices #future #prompt-engineering #automation #code-agents #prompt-engineering #code-agents #prompt-engineering
`cat file.py | llm -t fabric:explain_code`. Ref #future
#llm-ops #markdown #prompt-engineering #ai-coding #automation #code-agents #prompt-engineering
- Break down the implementation into: 1. Planning. 2. API stubs. 3. Implementation.
- Use sub-tasks and sub-agents to conserve context.
- Write it like `@filename`.
- `&` -> `&amp;` but `&x` -> `&x`.
- Use ultrathink.
#prompt-engineering
`tools` field rather than injecting tools into the system prompt. The model has been trained to use the `tools` field. #best-practices
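As a sketch, assuming OpenAI's Chat Completions API (other providers with a `tools` parameter work similarly; the tool itself is made up):

```python
from openai import OpenAI  # assumes the official openai Python SDK

client = OpenAI()

# Declare the schema in the dedicated `tools` field instead of pasting it into the system prompt.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Pune?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```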
#future #llm-ops #markdown #optimization #prompt-engineering #automation #future #prompt-engineering #ai-coding #automation #code-agents #dev #future #github #prompt-engineering #ai-coding #code-agents #dev #future #github #prompt-engineering #prompt-engineering #ai-coding #code-agents #llm-ops #prompt-engineering #ai-coding #code-agents #dev #github #html #markdown #prompt-engineering #future #github #markdown #prompt-engineering #automation #future #markdown #optimization #prompt-engineering #best-practices #chatgpt #optimization #prompt-engineering #prompt-engineering #automation #code-agents #future #html #llm-ops #markdown #prompt-engineering #web-dev #write #ai-coding #chatgpt #llm-ops #prompt-engineering #future #llm-ops #prompt-engineering
`cmdg`. #code-agents
#github #llm-ops #markdown #prompt-engineering #ai-coding #embeddings #future #optimization #prompt-engineering #chatgpt #gpu #prompt-engineering
tqdm's `pbar.write()` can print logs while showing progress. It's worth noting such learnings until it becomes a habit. #ai-coding
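For example, the module-level `tqdm.write()` (also available as `write()` on a progress-bar instance) prints above the bar without corrupting it:

```python
from time import sleep
from tqdm import tqdm

for i in tqdm(range(20), desc="processing"):
    sleep(0.05)
    if i % 5 == 0:
        # A plain print() would break the bar; tqdm.write() prints above it.
        tqdm.write(f"checkpoint reached at step {i}")
```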
#automation #code-agents #future #github #markdown #prompt-engineering #learning #ai-coding #code-agents #future #markdown #prompt-engineering #ai-coding #code-agents #dev #prompt-engineering #web-dev #prompt-engineering #ai-coding #code-agents #github #markdown #prompt-engineering #ai-coding #code-agents #dev #github #llm-ops #markdown #prompt-engineering
- `ai!` comment to trigger changes and `ai?` to ask questions.
- `tmux`-based LLM tool for the command line. It screen-grabs from tmux, which is powerful.
- `make` sucks but is hard to beat. `just` comes closest.
- `yjs` is a good start but `automerge` (Rust, WASM) is faster and may be better.
#automation
#prompt-engineering #ai-coding #code-agents #dev #github #html #markdown #prompt-engineering #web-dev
`jq` for HTML. #github
#html #markdown #prompt-engineering #web-dev #best-practices #prompt-engineering #future #html #prompt-engineering #automation #markdown #optimization #prompt-engineering #ai-coding #automation #code-agents #html #optimization #prompt-engineering #web-dev
`gid` to each element. #code-agents
#html #markdown #prompt-engineering
<script src="https://cdn.jsdelivr.net/npm/marked/marked.min.js"></script><script>document.body.innerHTML = marked(document.body.textContent);</script>
. Use for dynamically generated static sites.
`?format=markdown` vs `?format=html` vs `?format=json`. Use in APIs.
`Accept` header, serve Markdown or HTML. Send `Vary: Accept` to indicate that the response depends on the `Accept` header. Use for dynamic web apps (see the sketch below). #document-conversion
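A minimal sketch of the `Accept`-header variant using only the Python standard library (endpoint and content are made up):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

MARKDOWN = "# Hello\n\nThis page negotiates its format."
HTML = "<h1>Hello</h1><p>This page negotiates its format.</p>"

class Negotiator(BaseHTTPRequestHandler):
    def do_GET(self):
        accept = self.headers.get("Accept", "")
        # Serve Markdown to clients (e.g. LLM agents) that ask for it; HTML otherwise.
        if "text/markdown" in accept:
            body, ctype = MARKDOWN, "text/markdown; charset=utf-8"
        else:
            body, ctype = HTML, "text/html; charset=utf-8"
        data = body.encode()
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Vary", "Accept")  # response depends on the Accept header
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Negotiator).serve_forever()
```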
#github #markdown #prompt-engineering #automation #future #prompt-engineering #ai-coding #automation #code-agents #prompt-engineering #automation #code-agents #llm-ops #optimization #prompt-engineering #ai-coding #code-agents #prompt-engineering #chatgpt #future #prompt-engineering #prompt-engineering #ai-coding #code-agents #llm-ops #markdown #optimization #prompt-engineering #write #prompt-engineering #automation #prompt-engineering #code-agents #dev #github #prompt-engineering #1 #llm-ops #prompt-engineering #ai-coding #automation #code-agents #llm-ops #prompt-engineering #automation #code-agents #llm-ops #prompt-engineering #automation #best-practices #llm-ops #optimization #prompt-engineering #ask #automation #code-agents #prompt-engineering #chatgpt #code-agents #github #prompt-engineering #ai-coding #code-agents #markdown #prompt-engineering #prompt-engineering #ai-coding #code-agents #prompt-engineering #best-practices #dev #llm-ops #prompt-engineering #dev #github #prompt-engineering #automation #code-agents #future #github #prompt-engineering #prompt-engineering #future #llm-ops #optimization #prompt-engineering #best-practices #optimization #prompt-engineering #learning #ai-coding #code-agents #dev #github #prompt-engineering #automation #dev #markdown #prompt-engineering #best-practices #github #prompt-engineering #prompt-engineering #python #speech-to-text
<link rel="modulepreload">
lets you load and compile modules early! #future
#html #prompt-engineering #code-agents #dev #future #markdown #prompt-engineering #write #code-agents #llm-ops #markdown #prompt-engineering #ai-coding #automation #code-agents #future #llm-ops #prompt-engineering #future #prompt-engineering #ai-coding #future #markdown #prompt-engineering #prediction
`search_enterprise(query: str)` tool and a `hint(M365Copilot_language: str)` tool as assistants. #code-agents
#prompt-engineering #optimization #prompt-engineering #ai-coding #automation #code-agents #dev #future #github #markdown #prompt-engineering #llm-ops #prompt-engineering #try #impossible
`ffmpeg -i filename [YOUR OPTIONS]`.
`pip install llmfoundry`
#code-agents #future #prompt-engineering #ai-coding #automation #best-practices #future #llm-ops #optimization #prompt-engineering #gpu #llm-ops #optimization #prompt-engineering #hosting #prompt-engineering #server #web-dev #code-agents #prompt-engineering #future #llm-ops #markdown #models #prompt-engineering #best-practices #dev #future #prompt-engineering #prediction #automation #future #prompt-engineering #prompt-engineering #web-dev #future #html #prompt-engineering #web-dev #ai-coding #automation #prompt-engineering #speech-to-text #tts #future #optimization #prompt-engineering #ai-coding #prompt-engineering #todo #ans
`toml` (as `tomllib`) is part of the Python 3.11 standard library!
`{'code', 'optimized_code'}` will generate `code` and then optimize it. #code-agents
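A sketch of that field-ordering trick, assuming OpenAI's Chat Completions structured outputs (model name and schema are illustrative). Because `code` precedes `optimized_code` in the schema, the model drafts the code first and then optimizes it:

```python
from openai import OpenAI  # assumes the official openai Python SDK

client = OpenAI()

# Key order matters: `code` is generated before `optimized_code`.
schema = {
    "type": "object",
    "properties": {
        "code": {"type": "string"},
        "optimized_code": {"type": "string"},
    },
    "required": ["code", "optimized_code"],
    "additionalProperties": False,
}

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a Python function that deduplicates a list."}],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "code_then_optimize", "schema": schema, "strict": True},
    },
)
print(response.choices[0].message.content)  # JSON containing both fields
```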
#future #optimization #prompt-engineering #prompt-engineering #automation #prompt-engineering #lesson #github #markdown #prompt-engineering #markdown #optimization #prompt-engineering #ai-coding #automation #code-agents #future #llm-ops #optimization #prompt-engineering #learning #ai-coding #automation #code-agents #prompt-engineering #try #chatgpt #future #prompt-engineering #automation #chatgpt #code-agents #prompt-engineering #speech-to-text #tts