Setup
Python
Pick one of the following ways to set up your Python environment.
Here is the requirements.txt you will need to set up the environment:
```
altair
anthropic
anthropic[bedrock]
boto3
chatlas
faicons
ipykernel
langchain
langchain-anthropic
langchain-openai
langgraph
openai
palmerpenguins
pandas
pillow
playwright
plotly
plotnine
python-dotenv
querychat @ git+https://github.com/posit-dev/querychat
requests
ridgeplot
seaborn
shiny
shinychat
shinywidgets
tokenizers
```
conda
```shell
conda create -n shiny python=3.12
conda activate shiny
pip install -r requirements.txt
```
python + venv
```shell
python -m venv venv
source venv/bin/activate && pip install -r requirements.txt
```
uv
```shell
uv venv venv
source venv/bin/activate && uv pip install -r requirements.txt
```
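Whichever route you take, it is worth a quick sanity check that the environment resolved correctly before the workshop. A minimal sketch using only the standard library (the package list here is just a subset of requirements.txt; note that the pip package `python-dotenv` imports as `dotenv`):

```python
import importlib.util

# Spot-check that a few workshop packages are importable in the
# active environment. Keys are pip package names; values are the
# module names they import as.
CHECKS = {
    "shiny": "shiny",
    "pandas": "pandas",
    "anthropic": "anthropic",
    "python-dotenv": "dotenv",
}

for package, module in CHECKS.items():
    status = "OK" if importlib.util.find_spec(module) else "MISSING"
    print(f"{package}: {status}")
```

If anything prints `MISSING`, make sure you activated the environment you installed into before running the script.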
IDE
I’m using Positron: https://positron.posit.co/, but feel free to use VS Code. We will not be working with Jupyter Notebooks in this workshop.
You will need the Shiny extension for VS Code.
Chat Model
GitHub Models
You will need to create a GitHub Personal Access Token (PAT). It does not need any scopes (e.g., repo, workflow, etc.).
General instructions from the GitHub docs on creating a PAT: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token-classic
Instructions from the GitHub Models docs: https://github.com/Azure-Samples/python-ai-agent-frameworks-demos/tree/main?tab=readme-ov-file#configuring-github-models
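Once you have a PAT, store it in an environment variable (or a .env file) so the workshop code can pick it up. As an optional smoke test, you can hit the GitHub Models endpoint directly; the endpoint URL and model id below are my reading of the GitHub Models docs linked above, so double-check them there:

```shell
# Store the PAT where the workshop code expects it
# (or put GITHUB_TOKEN=... in a .env file instead)
export GITHUB_TOKEN="github_pat_YOUR_TOKEN_HERE"

# Optional smoke test -- endpoint and model id are assumptions,
# verify against the GitHub Models docs linked above
curl -s "https://models.github.ai/inference/chat/completions" \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/gpt-4o-mini", "messages": [{"role": "user", "content": "Say hi"}]}'
```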
(optional) Local LLM: Ollama
- Download Ollama: https://ollama.com/
- Pick a model from the Ollama model page: https://ollama.com/search. Any model that fits on your computer will do.
- You can pick multiple models if you’d like; we will compare results during the workshop.
- Here are a few example models with their download sizes you can try:
| Model | Download Size | URL | Install Command |
|---|---|---|---|
| qwen3:0.6b | 523MB | https://ollama.com/library/qwen3 | `ollama run qwen3:0.6b` |
| qwen3 | 5.2GB | https://ollama.com/library/qwen3 | `ollama run qwen3` |
| phi4-mini-reasoning | 3.2GB | https://ollama.com/library/phi4-mini-reasoning | `ollama run phi4-mini-reasoning` |
| devstral | 14GB | https://ollama.com/library/devstral | `ollama run devstral` |
| llama4 | 67GB | https://ollama.com/library/llama4 | `ollama run llama4` |
| llama4:128x17b | 245GB | https://ollama.com/library/llama4 | `ollama run llama4:128x17b` |
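Once `ollama run` has pulled a model, Ollama also serves a local REST API on port 11434, which is how the Python frameworks in requirements.txt talk to it. A standard-library sketch (the model name is whichever one you pulled; the request is only actually sent if you uncomment the last lines with the Ollama server running):

```python
import json
import urllib.request

# Ollama's local generate endpoint (default port 11434)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "qwen3:0.6b") -> urllib.request.Request:
    """Build (but do not send) a request for Ollama's /api/generate endpoint."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=body.encode(),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = build_request("Why is the sky blue?")
    # Requires a running Ollama server; uncomment to actually send:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp)["response"])
    print(req.full_url)
```

In the workshop itself you will use higher-level clients (e.g., chatlas) rather than raw HTTP, but this is a quick way to confirm the server is reachable.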
(Optional) Chat provider with API access (paid)
If you pay for Claude, ChatGPT, etc. through their web/desktop applications, note that API access is a separate purchase: your subscription does not include an API key. Depending on your usage, you may even find that paying for API access is cheaper!
Anthropic Claude
- Sign up at https://console.anthropic.com.
- Load up enough credit so you won’t be sad if something goes wrong.
- Create a key at https://console.anthropic.com/settings/keys
Google Gemini
- Log in to https://aistudio.google.com with a Google account
- Click "Create API key" and copy the key to your clipboard.
OpenAI ChatGPT
- Sign up at https://openai.com/
- Create a key at https://platform.openai.com/api-keys
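Wherever your key comes from, avoid hard-coding it in your apps: the workshop code reads keys from environment variables, and python-dotenv (already in requirements.txt) loads them from a local .env file via `load_dotenv()`. Roughly, it does something like this simplified standard-library sketch (use the real package in your own apps):

```python
import os
import pathlib
import tempfile

def load_env_file(path) -> None:
    """Simplified version of python-dotenv's load_dotenv(): read KEY=VALUE
    lines into os.environ without overwriting variables that are already set."""
    for line in pathlib.Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())

if __name__ == "__main__":
    # Demo with a throwaway .env file; in your project, keep .env next to
    # app.py and add it to .gitignore so keys never land in version control.
    with tempfile.TemporaryDirectory() as tmp:
        env_path = pathlib.Path(tmp) / ".env"
        env_path.write_text("EXAMPLE_API_KEY=sk-example-123\n")
        load_env_file(env_path)
        print(os.environ["EXAMPLE_API_KEY"])
```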
Check your installation
- Clone / download this repository: <>
- Activate the Python environment with the packages you just installed
- Run the test-install.py app with:

```shell
shiny run test-install.py
```
You should see output like this:

```
$ shiny run test-install.py
INFO: Started server process [46615]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
```
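If test-install.py fails to start, you can isolate whether Shiny itself works with a minimal app of your own. This sketch uses Shiny's Express syntax; save it as app.py and run `shiny run app.py`:

```python
# Minimal Shiny Express app: a slider wired to a text output.
from shiny.express import input, render, ui

ui.input_slider("n", "Number of stars", min=1, max=20, value=5)

@render.text
def stars():
    # Re-renders whenever the slider moves
    return "*" * input.n()
```

If this app starts but test-install.py does not, the problem is likely a missing package from requirements.txt rather than your Shiny installation.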