seaq (pronounced "seek") allows you to extract text data from the web and process it with your favorite prompt and LLM model, all from your terminal.
seaq was initially created as an experiment in using LLMs to implement the idea of optimizing cognitive load, as presented by Dr. Justin Sung in this video. It was strongly inspired by fabric and was originally meant to be a Go-based alternative (as it turns out, fabric is now written in Go too).
- Support for multiple LLM providers:
  - Built-in: OpenAI, Anthropic, and Google.
  - Any OpenAI-compatible provider via `seaq connection`.
- Scrape web pages with various engines.
- Fetch YouTube transcripts, Udemy transcripts, X threads.
- Add patterns from a GitHub repository on demand.
- YAML-based configuration file.
```sh
# Fetch a YouTube video transcript with defaults in the `seaq.yaml` config file
seaq fetch youtube "446E-r0rXHI" | seaq

# Get insights from an X thread using a local model
seaq fetch x "1883686162709295541" | seaq --pattern prime_mind --model ollama/smollm2:latest

# Fetch a web page and chat with it
# `--auto` tells the built-in scraper to automatically detect the main content of the page
seaq fetch page "https://modelcontextprotocol.io/introduction" --auto | seaq chat
```

Make sure Go is installed before running the following command:

```sh
go install github.com/nt54hamnghi/seaq@latest
```

You may need to add the following environment variables to run seaq:

```sh
export GOROOT=/usr/local/go
export GOPATH=$HOME/go
# Add Go binary paths and local user binaries to the system PATH
export PATH=$GOPATH/bin:$GOROOT/bin:$HOME/.local/bin:$PATH
```

Run `seaq config setup` to generate a new config file interactively. It will walk you through selecting:
- An LLM model
- A pattern repository (where you store your patterns)
- A pattern (prompt)
- A default remote GitHub repository to download patterns
These values act as defaults and can be overridden by CLI flags and/or arguments.
```sh
seaq config setup
```

Once completed, you will have a minimal config file like this:
```yaml
model:
  name: anthropic/claude-3-5-sonnet-latest
pattern:
  name: take_note
  remote: https://github.com/danielmiessler/fabric
  repo: /home/nt54hamnghi/.config/seaq/patterns
```

Note: `seaq` reads API keys from environment variables. For example, if you're using a model from OpenAI, `seaq` expects the `OPENAI_API_KEY` environment variable to hold the API key.
Supported environment variables:
- `OPENAI_API_KEY`
- `ANTHROPIC_API_KEY`
- `GEMINI_API_KEY`
- `YOUTUBE_API_KEY`
- `CHROMA_URL`
- `X_AUTH_TOKEN`
- `X_CSRF_TOKEN`
- `UDEMY_ACCESS_TOKEN`
- `OLLAMA_HOST`
- `JINA_API_KEY`
- `FIRECRAWL_API_KEY`
- `SEAQ_LOG_LEVEL` (default: `info`)
- `SEAQ_CACHE_DURATION` (default: `24h`)
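For example, you might export the keys for the providers you use in your shell profile. The values below are placeholders, not working keys:

```shell
# Placeholder values -- replace these with your actual credentials
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
# Optional: increase log verbosity while debugging
export SEAQ_LOG_LEVEL=debug
```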
The root command, seaq, processes input text with a specified model and pattern. It can read input either from a file or from standard input (pipe).
```sh
# Process input from a file
seaq -i input.txt

# Process piped input
echo "Some text" | seaq
```

Available flags:
```text
  -m, --model string        model to use
      --hint string         optional context to guide the LLM's focus
      --no-stream           disable streaming mode
      --temperature float   temperature to use (default 0.7)
  -p, --pattern string      pattern to use
  -r, --repo string         path to the pattern repository
  -i, --input string        input file
  -c, --config string       config file (default is $HOME/.config/seaq.yaml)
  -o, --output string       output file
  -f, --force               overwrite existing file
```

Note: `seaq chat` is an experimental feature.
To use the chat feature, you need a running ChromaDB instance. To quickly start one:
```sh
docker run -d -p 7645:8000 --name chroma-core ghcr.io/chroma-core/chroma:0.5.0

# `seaq` requires the `CHROMA_URL` environment variable to be set
export CHROMA_URL=http://0.0.0.0:7645
```

Like the root command, `seaq chat` accepts input from both a file and standard input (pipe).
```sh
# Process input from a file
seaq chat -i essay.md

# Process piped input
cat essay.md | seaq chat
```

```sh
# List all available patterns
seaq pattern list

# Set and get the default pattern
# These will modify the `seaq.yaml` config file
seaq pattern set prime_mind
seaq pattern get
```

```sh
# List all available models
seaq model list

# Set and get the default model
# These will modify the `seaq.yaml` config file
seaq model set ollama/smollm2:latest
seaq model get
```

`seaq fetch` can fetch data from a variety of sources.

```sh
seaq fetch page "https://en.wikipedia.org/wiki/Go_(programming_language)"
```

Or you can use `firecrawl` or `jina` as the scraping engine with the `--engine` flag.

```sh
seaq fetch page --engine firecrawl "https://en.wikipedia.org/wiki/Go_(programming_language)"
seaq fetch page --engine jina "https://en.wikipedia.org/wiki/Go_(programming_language)"
```

Note:
- `jina` requires setting the `JINA_API_KEY` environment variable.
- `firecrawl` requires setting the `FIRECRAWL_API_KEY` environment variable.
```sh
seaq fetch youtube "https://www.youtube.com/watch?v=446E-r0rXHI"
```

Or just with the video ID:

```sh
seaq fetch youtube "446E-r0rXHI"
```

You can use the `--start` and `--end` flags to filter the transcript.

```sh
# Fetch the transcript from 0:07 to 0:42
seaq fetch youtube "446E-r0rXHI" --start 0:07 --end 0:42
```

To include metadata (e.g., video title, description, etc.) in the output, use the `--metadata` flag.

```sh
seaq fetch youtube "446E-r0rXHI" --metadata
```

Note: Fetching metadata requires setting the `YOUTUBE_API_KEY` environment variable.
```sh
seaq fetch udemy "https://www.udemy.com/course/course-name/learn/lecture/lecture-id"
```

`seaq fetch udemy` also supports the `--start` and `--end` flags.

```sh
seaq fetch udemy "https://www.udemy.com/course/course-name/learn/lecture/lecture-id" --start 0:07 --end 0:42
```

Note: `seaq fetch udemy` requires setting the `UDEMY_ACCESS_TOKEN` environment variable. You can get this token by inspecting the cookies after logging in to your Udemy account.
```sh
seaq fetch x "https://x.com/morganb/status/1883686162709295541"
```

Or just with the tweet ID:

```sh
seaq fetch x "1883686162709295541"
```

By default, `seaq` will fetch the entire thread. Use `-t` or `--tweet` to get a single tweet.

```sh
seaq fetch x "1883686162709295541" --tweet
```

Note: `seaq fetch x` requires setting the `X_AUTH_TOKEN` and `X_CSRF_TOKEN` environment variables. You can get these tokens by inspecting the cookies after logging in to your X account.
All fetch commands support caching extracted results. Cached entries are stored in `cache.db` in your config directory and are valid for 24 hours. Time-to-live for cached entries can be configured with the `SEAQ_CACHE_DURATION` environment variable.
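For example, you could shorten the cache TTL to one hour. (The `24h` default suggests Go-style duration strings such as `30m` or `1h`; that format is an assumption here.)

```shell
# Assumption: SEAQ_CACHE_DURATION accepts Go-style duration strings
# (e.g. `30m`, `1h`, `24h`); shorten the cache TTL to one hour
export SEAQ_CACHE_DURATION=1h
```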
To disable caching, use the `--no-cache` flag.
Fetch commands also support outputting JSON. Use the `--json` flag to enable this.
Please see `seaq fetch --help` for more information.
`seaq` supports using local Ollama models through the `ollama` provider prefix.

```sh
# Use a local Ollama model
seaq -m ollama/llama2 -i input.txt
```

`seaq connection` allows you to manage OpenAI-compatible API endpoints. This is useful when you want to use alternative providers that implement the OpenAI API specification.
```sh
# Create a new connection
seaq connection create groq --url https://api.groq.com/openai/v1
```

This will add a new connection to the config file.
```yaml
connections:
  - base_url: https://api.groq.com/openai/v1
    provider: groq
model:
  name: anthropic/claude-3-5-sonnet-latest
pattern:
  name: take_note
  remote: https://github.com/danielmiessler/fabric
  repo: /home/nt54hamnghi/.config/seaq/patterns
```

```sh
# List all configured connections
seaq connection list
```
```text
PROVIDER     BASE URL                         ENV KEY
groq         https://api.groq.com/openai/v1   GROQ_API_KEY
openrouter   https://openrouter.ai/api/v1     OPENROUTER_API_KEY
```

The ENV KEY column tells you the environment variable that needs to be set for the connection to work.
```sh
# Remove a connection
seaq connection remove groq
```

Once a connection is created, you can list all models.
```sh
seaq model list
```

To add new patterns from a remote repository, `seaq` expects the repository to have a top-level `patterns` directory with one or more patterns.
The directory structure should look like this:
```text
patterns/
├── improve_prompt
│   └── system.md
├── prime_mind
│   └── system.md
└── write_blog
    └── system.md
```

By default, `seaq` will use `pattern.remote` in your config file as the remote repository. However, you can override this with the `--remote` flag.
```sh
seaq pattern add improve_prompt --remote https://github.com/danielmiessler/fabric
```

- Special thanks to Dr. Justin Sung for the inspiration
- The `fabric` project for inspiring seaq's design and functionality
