
Technical Documentation

To build this service, my implementation uses FastAPI, OpenAI's GPT-3.5 model, a Supabase database, AWS EC2, and a collection of Python utility functions that fetch, transform, and return code.

Local Setup

Step 1: Dependencies

TinyGen requires Python 3.9+. Install all dependencies using:
pip install -r requirements.txt
 
Libraries used (difflib and os are part of the Python standard library and need no installation):
  • difflib
  • os
  • OpenAI
  • python-dotenv
  • PyGithub
  • supabase
  • uvicorn
  • pydantic
  • fastapi
 

Step 2: Environment Variables

Set up environment variables by creating a .env file or configuring them via deployment secrets:
# ChatGPT API Access
OPENAI_API_KEY=""

# GitHub API Access
GITHUB_TOKEN=""

# Supabase Configuration
SUPABASE_URL=""
SUPABASE_KEY=""
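At startup, python-dotenv's load_dotenv() populates os.environ from the .env file. A minimal sketch of validating the keys afterwards, using only the standard library (require_env is an illustrative helper, not part of TinyGen):

```python
import os

# Illustrative helper: read a key configured in the .env file and fail
# fast with a clear message if it is missing or empty.
def require_env(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# In the real app, call load_dotenv() first, then e.g.:
# OPENAI_API_KEY = require_env("OPENAI_API_KEY")
# GITHUB_TOKEN = require_env("GITHUB_TOKEN")
```

Failing fast here keeps misconfiguration errors out of request handlers, where they would surface as opaque 500 responses.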
 

Step 3: Running TinyGen Locally

Navigate to the app/ directory and run:
python3 -m uvicorn main:app --reload
 

FastAPI App

The run_tiny_gen function is the core of the FastAPI app. It processes requests to transform code from a public repository based on user prompts, using a set of utility functions to:
  1. Fetch the original code
  2. Apply transformations via ChatGPT
  3. Generate a diff of the changes
 

Core API Components

  • TinyGenRequest: Captures the GitHub repo URL (repoUrl) and prompt (prompt).
  • DiffResponse: Returns the diff of the original and transformed code.
  • get_repo_files_as_string: Fetches the repository’s code as a string.
  • ask_chatgpt: Sends the prompt to ChatGPT and applies transformations.
  • calculate_code_diff: Computes the difference between original and modified code.
  • supabase.py: Handles storing the transformation results (logs) in Supabase DB.
  • router: Defines the /run API endpoint.
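Since difflib is among the dependencies, calculate_code_diff can be sketched with the standard library's difflib.unified_diff (this is an assumed implementation; the real helper in utils/calculate_diff.py may differ):

```python
import difflib

# Hypothetical sketch of calculate_code_diff built on difflib.unified_diff.
def calculate_code_diff(original: str, modified: str) -> str:
    diff_lines = difflib.unified_diff(
        original.splitlines(keepends=True),
        modified.splitlines(keepends=True),
        fromfile="original",
        tofile="modified",
    )
    return "".join(diff_lines)

# Example: a one-line change produces "-a = 1" and "+a = 2" lines.
diff_text = calculate_code_diff("a = 1\n", "a = 2\n")
```

Returning a plain unified-diff string keeps the response model simple: DiffResponse only needs a single diff field.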
 
from fastapi import FastAPI, APIRouter, HTTPException
from pydantic import BaseModel

from utils.github_interaction import get_repo_files_as_string
from utils.chatgpt_interaction import ask_chatgpt
from utils.calculate_diff import calculate_code_diff

router = APIRouter()


class TinyGenRequest(BaseModel):
    repoUrl: str
    prompt: str


class DiffResponse(BaseModel):
    diff: str


@router.post("/run", response_model=DiffResponse)
async def run_tiny_gen(request: TinyGenRequest):
    try:
        original_code = get_repo_files_as_string(request.repoUrl)
        fixed_code = ask_chatgpt(request.prompt, original_code)

        reflection_text = (
            "Review changes against requirements: '{prompt}'. "
            "Reply with [CONFIDENT] for no further improvements needed, or "
            "[REVISION NEEDED] for more adjustments."
        ).format(prompt=request.prompt)
        code_for_reflection = "Modified Code:\n{modified} \n Original Code:\n{original}\n\n".format(
            original=original_code, modified=fixed_code)
        reflection_response = ask_chatgpt(reflection_text, code_for_reflection)

        # Check if revisions are needed
        if "[REVISION NEEDED]" in reflection_response:
            fixed_code = reflection_response

        diff = calculate_code_diff(original_code, fixed_code)
        return DiffResponse(diff=diff)
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))


app = FastAPI()
app.include_router(router)
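Once the server is running locally (Step 3), the /run endpoint can be exercised with a small standard-library client. The repository URL and prompt below are illustrative examples, not part of TinyGen:

```python
import json
import urllib.request

# Build a POST request against the local /run endpoint
# (127.0.0.1:8000 is uvicorn's default bind address).
payload = {
    "repoUrl": "https://github.com/octocat/Hello-World",  # example public repo
    "prompt": "Convert the scripts to use async/await",   # example prompt
}
request = urllib.request.Request(
    "http://127.0.0.1:8000/run",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment to send the request against a running instance:
# with urllib.request.urlopen(request) as response:
#     print(json.loads(response.read())["diff"])
```

The JSON keys (repoUrl, prompt) must match the TinyGenRequest model exactly; a mismatch returns a 422 validation error from FastAPI.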

Chaining and Custom Prompts

Chaining and custom prompts are used within the service to refine outputs and improve accuracy: after the first transformation, the model is asked to review its own response against the original requirements and iterate on the changes if needed.
original_code = get_repo_files_as_string(request.repoUrl)
fixed_code = ask_chatgpt(request.prompt, original_code)

reflection_text = (
    "Review changes against requirements: '{prompt}'. "
    "Reply with [CONFIDENT] for no further improvements needed, or "
    "[REVISION NEEDED] for more adjustments."
).format(prompt=request.prompt)
code_for_reflection = "Modified Code:\n{modified} \n Original Code:\n{original}\n\n".format(
    original=original_code, modified=fixed_code)
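The single reflection pass above could be generalized into a bounded review-and-revise loop. This is a hypothetical sketch, not TinyGen's actual behavior; refine and ask_model are illustrative names, with ask_model standing in for ask_chatgpt so the sketch runs without API access:

```python
# Hypothetical bounded refinement loop generalizing the reflection step.
def refine(prompt: str, code: str, ask_model, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        review = ask_model(
            f"Review changes against requirements: '{prompt}'. "
            "Reply with [CONFIDENT] or [REVISION NEEDED].",
            code,
        )
        if "[REVISION NEEDED]" not in review:
            break  # the model is confident; keep the current code
        code = ask_model(prompt, code)  # ask for another revision
    return code
```

Capping the rounds with max_rounds keeps latency and token cost predictable even when the model never reports [CONFIDENT].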
 

Getting TinyGen Live!

To deploy, I hosted the FastAPI app on an EC2 instance with NGINX as a reverse proxy.

Nginx Configuration

server {
    listen 80;
    server_name [[ENTER_YOUR_INSTANCE_IP_HERE]];

    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}
 

Try TinyGen out!