
Prompt Engineering for Python

Not general prompt theory — specific prompts for specific situations you face every day writing Python. Pydantic models, FastAPI endpoints, SQLAlchemy queries, pytest patterns, and Django. A recipe book, not a textbook.

Last reviewed: Apr 22 2026


Part 1: Why Python Prompting Is Different

Python is dynamically typed by default. That single fact changes how AI behaves when you ask it to write Python code. Without type information, AI makes assumptions about what your functions accept, what they return, and how your data is structured — and those assumptions are often wrong in ways that only surface at runtime, not when you look at the code.

The developers who get the best results from AI with Python are the ones who have adopted type hints. Not because type hints are inherently good (though they are), but because type hints give AI the same constraint layer that TypeScript gives TypeScript developers. When AI knows a function returns User | None and accepts CreateUserRequest, it generates code that handles both branches and uses the right field names. Without those hints, it invents field names and ignores the null case.

The Python Prompt Formula

Pydantic models + type annotations + patterns + constraints + task. Paste your Pydantic models, annotate your function signatures, name the patterns you want (async, dependency injection, repository pattern), list what to avoid, then describe what to build. This produces code that integrates cleanly with your existing codebase on the first generation.

The Three Core Differences

Three things make Python AI prompting distinct from other languages:

  • Dynamic typing: without type hints, AI guesses at the shape of your data, and wrong guesses surface at runtime instead of at review time.
  • Version ambiguity: Pydantic v1 and v2, SQLAlchemy 1.x and 2.x, and multiple Django eras all coexist in AI training data, so unversioned prompts produce mixed-era code.
  • The sync/async split: async FastAPI and SQLAlchemy code looks almost identical to its sync counterpart, so AI will happily mix the two unless your prompt pins the pattern down.

Your CLAUDE.md / .cursorrules for Python

Before writing any prompts, set up a project context file. This is the configuration AI reads at the start of every session — the rules that apply to every interaction without you repeating them:

# Project Context

## Stack
- Python 3.12+
- FastAPI 0.110+ with async/await throughout
- Pydantic v2 for all data validation and serialization
- SQLAlchemy 2.x (async) with asyncpg driver
- Alembic for database migrations
- pytest + httpx (AsyncClient) for testing

## Patterns to follow
- All functions have type annotations — parameters and return types
- Pydantic models for all request bodies, response models, and config
- Async throughout — no sync database calls in async routes
- Repository pattern for database access — no raw SQL in route handlers
- Dependency injection via FastAPI Depends() for DB sessions and auth
- Use Python 3.10+ union syntax: `str | None` not `Optional[str]`

## Patterns to avoid
- Do NOT use mutable default arguments (def f(x=[]) is a bug)
- Do NOT use `Optional[X]` — use `X | None` instead
- Do NOT use `dict` as a return type — define a Pydantic model
- Do NOT use `print()` for logging — use the logger from app/core/logging.py
- Do NOT access settings via os.environ directly — use the Settings class
- Do NOT use `*` imports

## Error handling
- HTTPException for API errors with appropriate status codes
- Custom exception classes in app/exceptions.py for domain errors
- All exceptions caught and logged before re-raising

## Testing
- pytest with pytest-asyncio for async tests
- Use fixtures in conftest.py — no setup/teardown in test classes
- httpx AsyncClient for route integration tests
- Test files mirror source: app/routes/users.py → tests/routes/test_users.py
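
The mutable-default rule in the "Patterns to avoid" list deserves one concrete demonstration, because the failure is invisible until the second call:

```python
def append_bad(item, bucket=[]):
    # the default list is created ONCE, at function definition time,
    # and is shared by every call that omits `bucket`
    bucket.append(item)
    return bucket

def append_good(item, bucket=None):
    # the conventional fix: use None as the sentinel and build a fresh list
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

assert append_bad(1) == [1]
assert append_bad(2) == [1, 2]   # surprise: state leaks across calls
assert append_good(1) == [1]
assert append_good(2) == [2]     # each call gets its own list
```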

Part 2: Pydantic-First Prompting

Pydantic models in Python are the equivalent of TypeScript interfaces: they define the shape of your data and constrain what AI generates. Always paste your Pydantic models before asking AI to implement anything that touches that data.

Define Models Before Implementation

You

Here are my Pydantic models:

from pydantic import BaseModel, EmailStr, Field
from datetime import datetime
from uuid import UUID

class UserBase(BaseModel):
    email: EmailStr
    full_name: str
    is_active: bool = True

class UserCreate(UserBase):
    password: str = Field(min_length=8)

class UserUpdate(BaseModel):
    email: EmailStr | None = None
    full_name: str | None = None
    is_active: bool | None = None

class UserResponse(UserBase):
    id: UUID
    created_at: datetime

    model_config = {"from_attributes": True}

Write a UserRepository class that uses SQLAlchemy async session for CRUD operations. Methods: get_by_id, get_by_email, create, update, delete. Use the models above for input/output types. Hash passwords with passlib in create.

Because the models are explicit, AI generates field names that match exactly (full_name, not name or username), handles the UUID and datetime types correctly, and respects the optional fields in UserUpdate. Without the models, you'd get a generic implementation that needs significant manual adjustment.
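
The from_attributes setting is what makes the repository's return values work with UserResponse: it lets model_validate read plain attributes off an ORM object. A minimal sketch, assuming Pydantic v2 is installed — a dataclass stands in for the SQLAlchemy row, and EmailStr is simplified to str so the snippet is dependency-free:

```python
from dataclasses import dataclass
from datetime import datetime
from uuid import UUID, uuid4

from pydantic import BaseModel

class UserResponse(BaseModel):
    id: UUID
    email: str          # simplified from EmailStr for this sketch
    full_name: str
    created_at: datetime

    model_config = {"from_attributes": True}

@dataclass
class UserRow:
    """Stand-in for a SQLAlchemy ORM row."""
    id: UUID
    email: str
    full_name: str
    created_at: datetime

row = UserRow(id=uuid4(), email="ada@example.com",
              full_name="Ada", created_at=datetime.now())

# from_attributes lets model_validate build the response from any
# attribute-bearing object, not just a dict
resp = UserResponse.model_validate(row)
```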

Generating Models from a Description

The reverse pattern: describe your domain and ask AI to draft the Pydantic models, then critique them before using them:

You

Draft Pydantic v2 models for a project management system. Entities: Project, Task, User, Comment. Requirements:

  • A Task belongs to a Project and optionally has an assignee (User)
  • Tasks have status: todo / in-progress / review / done
  • Tasks have priority: low / medium / high / urgent
  • Comments belong to Tasks and have an author (User)
  • All entities have UUID primary keys and created_at timestamps
  • Separate Create, Update, and Response models for each entity
  • Response models use model_config = {"from_attributes": True} for ORM compatibility

After generating the models, list any design decisions you made that I might want to change.

The "list design decisions" instruction is important. AI will make choices about nullable fields, optional vs required fields, and enum definitions that may not match your intent. Reviewing those decisions before building on the models saves hours of downstream refactoring.

Using Validators for Business Rules

You

Add Pydantic v2 validators to the TaskCreate model:

class TaskCreate(BaseModel):
    title: str
    due_date: datetime | None = None
    priority: Priority = Priority.MEDIUM
    project_id: UUID

Rules: title must be 1-200 characters (strip whitespace). If due_date is provided, it must be in the future. Use @field_validator and @model_validator as appropriate. Use Pydantic v2 syntax — not v1.

Always Specify Pydantic Version

Pydantic v1 and v2 have different decorator syntax, different validator APIs, and different configuration patterns. AI's training data contains both. Always say "Pydantic v2" explicitly, or you'll get v1-style @validator decorators and class Config blocks in a v2 codebase.


Part 3: FastAPI Endpoints

FastAPI is one of the best frameworks to use with AI because it's heavily typed, well-documented, and follows consistent patterns. When you include your Pydantic models and dependency injection setup, AI generates endpoints that integrate cleanly with your existing structure.

The Full Endpoint Prompt

You

Write a FastAPI router for task management. Here are the relevant models and dependencies:

# Models (defined in app/schemas/task.py)
class TaskCreate(BaseModel): ...
class TaskUpdate(BaseModel): ...
class TaskResponse(BaseModel): ...

# Dependencies (defined in app/dependencies.py)
async def get_db() -> AsyncGenerator[AsyncSession, None]: ...
async def get_current_user(token: str = Depends(oauth2_scheme)) -> User: ...

# Repository (defined in app/repositories/task.py)
class TaskRepository:
    async def get_by_id(self, id: UUID) -> Task | None: ...
    async def list_by_project(self, project_id: UUID, skip: int, limit: int) -> list[Task]: ...
    async def create(self, data: TaskCreate, creator_id: UUID) -> Task: ...
    async def update(self, id: UUID, data: TaskUpdate) -> Task | None: ...
    async def delete(self, id: UUID) -> bool: ...

Endpoints needed:

  • GET /projects/{project_id}/tasks — list with pagination (skip/limit query params)
  • POST /projects/{project_id}/tasks — create, set creator to current user
  • GET /tasks/{task_id} — get single task
  • PATCH /tasks/{task_id} — partial update
  • DELETE /tasks/{task_id} — delete, return 204

All routes require authentication. Return 404 if task not found. Use APIRouter with prefix /api/v1.

from fastapi import APIRouter, Depends, HTTPException, status
from sqlalchemy.ext.asyncio import AsyncSession
from uuid import UUID

from app.dependencies import get_db, get_current_user
from app.repositories.task import TaskRepository
from app.schemas.task import TaskCreate, TaskUpdate, TaskResponse
from app.models.user import User

router = APIRouter(prefix="/api/v1", tags=["tasks"])


def get_task_repo(db: AsyncSession = Depends(get_db)) -> TaskRepository:
    return TaskRepository(db)


@router.get("/projects/{project_id}/tasks", response_model=list[TaskResponse])
async def list_tasks(
    project_id: UUID,
    skip: int = 0,
    limit: int = 20,
    repo: TaskRepository = Depends(get_task_repo),
    current_user: User = Depends(get_current_user),
):
    return await repo.list_by_project(project_id, skip=skip, limit=limit)


@router.post("/projects/{project_id}/tasks",
             response_model=TaskResponse,
             status_code=status.HTTP_201_CREATED)
async def create_task(
    project_id: UUID,
    data: TaskCreate,
    repo: TaskRepository = Depends(get_task_repo),
    current_user: User = Depends(get_current_user),
):
    return await repo.create(data, creator_id=current_user.id)


@router.get("/tasks/{task_id}", response_model=TaskResponse)
async def get_task(
    task_id: UUID,
    repo: TaskRepository = Depends(get_task_repo),
    current_user: User = Depends(get_current_user),
):
    task = await repo.get_by_id(task_id)
    if not task:
        raise HTTPException(status_code=404, detail="Task not found")
    return task


@router.patch("/tasks/{task_id}", response_model=TaskResponse)
async def update_task(
    task_id: UUID,
    data: TaskUpdate,
    repo: TaskRepository = Depends(get_task_repo),
    current_user: User = Depends(get_current_user),
):
    task = await repo.update(task_id, data)
    if not task:
        raise HTTPException(status_code=404, detail="Task not found")
    return task


@router.delete("/tasks/{task_id}", status_code=status.HTTP_204_NO_CONTENT)
async def delete_task(
    task_id: UUID,
    repo: TaskRepository = Depends(get_task_repo),
    current_user: User = Depends(get_current_user),
):
    deleted = await repo.delete(task_id)
    if not deleted:
        raise HTTPException(status_code=404, detail="Task not found")

Dependency Injection Patterns

FastAPI's dependency system is powerful but has patterns that AI often gets wrong without explicit instruction. Specify your dependency patterns clearly:

You

Write a FastAPI dependency that checks whether the current user has a specific permission. The permission system works as follows:

class Permission(str, Enum):
    READ_TASKS = "tasks:read"
    WRITE_TASKS = "tasks:write"
    ADMIN = "admin"

# User model has: user.permissions: list[Permission]

The dependency should be a factory that returns a dependency function — so I can use it like: Depends(require_permission(Permission.WRITE_TASKS)). Raise 403 if the user lacks the permission. Use get_current_user internally.


from fastapi import Depends, HTTPException, status
from app.dependencies import get_current_user
from app.models.user import User
from app.schemas.permission import Permission


def require_permission(permission: Permission):
    """Factory that returns a dependency checking for a specific permission."""
    async def _check(current_user: User = Depends(get_current_user)) -> User:
        if permission not in current_user.permissions:
            raise HTTPException(
                status_code=status.HTTP_403_FORBIDDEN,
                detail=f"Permission required: {permission.value}",
            )
        return current_user
    return _check


# Usage in routes:
# @router.post("/tasks", dependencies=[Depends(require_permission(Permission.WRITE_TASKS))])
# or
# current_user: User = Depends(require_permission(Permission.WRITE_TASKS))

Background Tasks and Middleware

You

Add request logging middleware to our FastAPI app. Log: method, path, status code, and response time. Use our existing logger (from app.core.logging import logger). Exclude the /health and /metrics paths. Use BaseHTTPMiddleware.


Part 4: SQLAlchemy & Databases

SQLAlchemy 2.x with async support is where Python AI generation requires the most guidance. The API changed significantly between 1.x and 2.x, async patterns are distinct from sync patterns, and the repository pattern requires setup that AI won't know about unless you show it.

Always Specify the SQLAlchemy Version

You

Write a SQLAlchemy 2.x async model for a Task entity. Requirements:

  • Use the new SQLAlchemy 2.x mapped_column and Mapped type annotations (not the old Column() style)
  • UUID primary key, auto-generated
  • Foreign key to projects.id and optional foreign key to users.id (assignee)
  • Status as a Python Enum stored as a string in the database
  • Timestamps: created_at (auto-set), updated_at (auto-updated)
  • Relationship to Project (many-to-one) and User (many-to-one, nullable)
from sqlalchemy import String, ForeignKey, func
from sqlalchemy.orm import Mapped, mapped_column, relationship
from sqlalchemy.dialects.postgresql import UUID as PG_UUID
from datetime import datetime
from uuid import UUID, uuid4
import enum

from app.db.base import Base


class TaskStatus(str, enum.Enum):
    TODO = "todo"
    IN_PROGRESS = "in-progress"
    REVIEW = "review"
    DONE = "done"


class Task(Base):
    __tablename__ = "tasks"

    id: Mapped[UUID] = mapped_column(
        PG_UUID(as_uuid=True), primary_key=True, default=uuid4
    )
    title: Mapped[str] = mapped_column(String(200), nullable=False)
    status: Mapped[TaskStatus] = mapped_column(
        String(20), default=TaskStatus.TODO, nullable=False
    )
    project_id: Mapped[UUID] = mapped_column(
        PG_UUID(as_uuid=True), ForeignKey("projects.id"), nullable=False
    )
    assignee_id: Mapped[UUID | None] = mapped_column(
        PG_UUID(as_uuid=True), ForeignKey("users.id"), nullable=True
    )
    created_at: Mapped[datetime] = mapped_column(
        server_default=func.now(), nullable=False
    )
    updated_at: Mapped[datetime] = mapped_column(
        server_default=func.now(), onupdate=func.now(), nullable=False
    )

    # Relationships
    project: Mapped["Project"] = relationship("Project", back_populates="tasks")
    assignee: Mapped["User | None"] = relationship("User", foreign_keys=[assignee_id])

Async Repository Pattern

You

Write an async repository base class and a TaskRepository that extends it. Use SQLAlchemy 2.x async session (AsyncSession). The base class should provide get_by_id, create, update, and delete. TaskRepository adds list_by_project with pagination and list_by_assignee. Use select() statements, not the legacy session.query() API.

session.query() is Legacy — Say So Explicitly

SQLAlchemy's old session.query(Model) API is heavily represented in AI training data. If you don't specify "use select() statements" and "SQLAlchemy 2.x", you'll frequently get legacy-style queries that still work but are discouraged and inconsistent with modern async patterns.

Alembic Migration Prompts

You

Write an Alembic migration to add a priority column to the tasks table. The column is a VARCHAR(20), NOT NULL, default value 'medium'. The migration needs to be safe to run on a live table — use a two-step approach: add nullable first, backfill, then set NOT NULL. Write both upgrade() and downgrade().

"""add priority to tasks

Revision ID: a1b2c3d4e5f6
Revises: previous_revision
Create Date: 2026-04-14
"""
from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic
revision = "a1b2c3d4e5f6"
down_revision = "previous_revision"
branch_labels = None
depends_on = None


def upgrade() -> None:
    # Step 1: Add column as nullable
    op.add_column("tasks", sa.Column("priority", sa.String(20), nullable=True))

    # Step 2: Backfill existing rows
    op.execute("UPDATE tasks SET priority = 'medium' WHERE priority IS NULL")

    # Step 3: Set NOT NULL constraint now that all rows have a value
    op.alter_column("tasks", "priority", nullable=False)


def downgrade() -> None:
    op.drop_column("tasks", "priority")

Part 5: pytest Patterns

pytest with FastAPI and async SQLAlchemy has a specific setup that AI needs to know about before generating tests. Show it the conftest structure first.

Establishing the Test Context

You

Here is our test conftest.py. Use this structure when writing any tests:

# tests/conftest.py
import pytest
import pytest_asyncio
from httpx import AsyncClient, ASGITransport
from sqlalchemy.ext.asyncio import create_async_engine, async_sessionmaker

from app.main import app
from app.db.base import Base
from app.dependencies import get_db

TEST_DATABASE_URL = "postgresql+asyncpg://test:test@localhost:5432/testdb"

@pytest_asyncio.fixture(scope="session")
async def engine():
    engine = create_async_engine(TEST_DATABASE_URL)
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.create_all)
    yield engine
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.drop_all)
    await engine.dispose()

@pytest_asyncio.fixture
async def db_session(engine):
    async_session = async_sessionmaker(engine, expire_on_commit=False)
    async with async_session() as session:
        yield session
        await session.rollback()

@pytest_asyncio.fixture
async def client(db_session):
    app.dependency_overrides[get_db] = lambda: db_session
    async with AsyncClient(transport=ASGITransport(app=app), base_url="http://test") as client:
        yield client
    app.dependency_overrides.clear()

Write integration tests for the task creation endpoint: POST /api/v1/projects/{project_id}/tasks. Cover: successful creation, missing required fields, unauthenticated request, project not found.

import pytest
from uuid import UUID, uuid4
from httpx import AsyncClient


pytestmark = pytest.mark.asyncio


async def test_create_task_success(client: AsyncClient, auth_headers: dict, project_id: UUID):
    response = await client.post(
        f"/api/v1/projects/{project_id}/tasks",
        json={"title": "Write unit tests", "priority": "high"},
        headers=auth_headers,
    )
    assert response.status_code == 201
    data = response.json()
    assert data["title"] == "Write unit tests"
    assert data["priority"] == "high"
    assert data["status"] == "todo"   # default
    assert "id" in data
    assert "created_at" in data


async def test_create_task_missing_title(client: AsyncClient, auth_headers: dict, project_id: UUID):
    response = await client.post(
        f"/api/v1/projects/{project_id}/tasks",
        json={"priority": "high"},   # title is required
        headers=auth_headers,
    )
    assert response.status_code == 422
    errors = response.json()["detail"]
    assert any(e["loc"] == ["body", "title"] for e in errors)


async def test_create_task_unauthenticated(client: AsyncClient, project_id: UUID):
    response = await client.post(
        f"/api/v1/projects/{project_id}/tasks",
        json={"title": "Should fail"},
        # no auth headers
    )
    assert response.status_code == 401


async def test_create_task_project_not_found(client: AsyncClient, auth_headers: dict):
    nonexistent_id = uuid4()
    response = await client.post(
        f"/api/v1/projects/{nonexistent_id}/tasks",
        json={"title": "Should fail"},
        headers=auth_headers,
    )
    assert response.status_code == 404

Unit Testing Service and Repository Logic

You

Write unit tests for this service function:

async def assign_task(
    task_id: UUID,
    assignee_id: UUID,
    current_user: User,
    task_repo: TaskRepository,
    user_repo: UserRepository,
) -> Task:
    task = await task_repo.get_by_id(task_id)
    if not task:
        raise TaskNotFound(task_id)
    assignee = await user_repo.get_by_id(assignee_id)
    if not assignee:
        raise UserNotFound(assignee_id)
    if not assignee.is_active:
        raise ValueError("Cannot assign task to inactive user")
    return await task_repo.update(task_id, TaskUpdate(assignee_id=assignee_id))

Mock the repositories using unittest.mock.AsyncMock. Cover: success, task not found, user not found, inactive user. Don't use any database.

Use AsyncMock for Async Functions

Regular MagicMock doesn't work for async def functions — you need AsyncMock. This is the most common source of confusing test failures when AI generates mocks for async Python code. If AI generates MagicMock for an async repository, correct it to AsyncMock explicitly: "use unittest.mock.AsyncMock for all async methods."
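
The difference is easy to demonstrate with nothing but the standard library:

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

# AsyncMock: child methods return awaitable coroutines
repo = AsyncMock()
repo.get_by_id.return_value = {"id": 1, "title": "demo"}

async def load_task():
    return await repo.get_by_id(1)

assert asyncio.run(load_task()) == {"id": 1, "title": "demo"}

# MagicMock: child methods return a plain MagicMock, which is not awaitable
broken_repo = MagicMock()

async def load_broken():
    return await broken_repo.get_by_id(1)

try:
    asyncio.run(load_broken())
except TypeError:
    pass  # "object MagicMock can't be used in 'await' expression"
```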


Part 6: Django Patterns

Django's convention-over-configuration approach means AI needs to know your project's structure before it can generate code that fits. The patterns are different enough from FastAPI that separate prompting guidance is warranted.

Establishing the Django Context

# Django Project Context

## Stack
- Django 5.x, Python 3.12
- Django REST Framework (DRF) for API endpoints
- PostgreSQL with psycopg3
- celery + redis for background tasks
- pytest-django for testing

## Structure
- apps/ — Django applications (one per domain)
- apps/users/, apps/projects/, apps/tasks/
- Each app has: models.py, serializers.py, views.py, urls.py, tests/

## Patterns
- Class-based views using DRF generics (ListCreateAPIView, etc.)
- ModelSerializer for all serializers — avoid Serializer from scratch
- Custom permissions in apps/core/permissions.py
- Use select_related() and prefetch_related() for any queryset touching relations
- All querysets in manager methods, not in views

## Patterns to avoid
- Do NOT use function-based views for new endpoints
- Do NOT put query logic in views — use manager methods
- Do NOT use Django's built-in User directly — use AUTH_USER_MODEL
- Do NOT use transaction.atomic() in serializers — only in views or service layer

DRF Serializer and View Prompts

You

Write a DRF serializer and view for creating and listing tasks. Here's the model:

class Task(models.Model):
    title = models.CharField(max_length=200)
    status = models.CharField(max_length=20, choices=TaskStatus.choices, default=TaskStatus.TODO)
    priority = models.CharField(max_length=20, choices=Priority.choices, default=Priority.MEDIUM)
    project = models.ForeignKey("Project", on_delete=models.CASCADE, related_name="tasks")
    assignee = models.ForeignKey(settings.AUTH_USER_MODEL, null=True, blank=True,
                                  on_delete=models.SET_NULL, related_name="assigned_tasks")
    created_by = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE,
                                    related_name="created_tasks")
    created_at = models.DateTimeField(auto_now_add=True)

Requirements: TaskSerializer for reads (include creator's name), TaskCreateSerializer for writes (set created_by from request.user automatically). View: list tasks for a project, create a task. Require authentication. Use DRF generics.

Django ORM Queryset Prompts

You

Write a Django ORM queryset method for this use case: get all tasks for a project where the assignee is active, ordered by priority then by created_at, with assignee data pre-fetched to avoid N+1 queries. The priority order should be: urgent first, then high, medium, low.

from django.db import models
from django.db.models import Case, When, Value, IntegerField


class TaskQuerySet(models.QuerySet):
    def for_project(self, project_id):
        return self.filter(project_id=project_id)

    def with_active_assignees(self):
        # unassigned tasks are kept; drop the isnull branch to require an active assignee
        return self.filter(
            models.Q(assignee__isnull=True) | models.Q(assignee__is_active=True)
        )

    def ordered_by_priority(self):
        priority_order = Case(
            When(priority="urgent", then=Value(0)),
            When(priority="high", then=Value(1)),
            When(priority="medium", then=Value(2)),
            When(priority="low", then=Value(3)),
            default=Value(4),
            output_field=IntegerField(),
        )
        return self.annotate(priority_rank=priority_order).order_by(
            "priority_rank", "created_at"
        )

    def with_related(self):
        return self.select_related("assignee", "created_by", "project")


class TaskManager(models.Manager):
    def get_queryset(self):
        return TaskQuerySet(self.model, using=self._db)

    def active_for_project(self, project_id):
        return (
            self.get_queryset()
            .for_project(project_id)
            .with_active_assignees()
            .ordered_by_priority()
            .with_related()
        )

Part 7: Python Prompt Library

Copy-paste prompts for the situations that come up most often. Each prompt is self-contained — paste it directly after setting up your project context.

New API Endpoint (FastAPI)

I need a new FastAPI endpoint. Here is the context:

Pydantic models:
[PASTE RequestModel and ResponseModel]

Existing repository:
[PASTE repository class signature with method signatures]

Dependencies available:
- get_db() -> AsyncSession
- get_current_user() -> User

Endpoint spec:
- Method: [GET/POST/PATCH/DELETE]
- Path: [e.g. /api/v1/projects/{project_id}/items]
- Auth: required / optional / none
- Logic: [describe what the endpoint should do]
- Error cases: [404 if not found / 403 if not authorized / etc.]

Requirements:
- Use Depends() for all dependencies
- Return 201 for POST, 204 for DELETE
- Use response_model for type safety
- Add the route to an APIRouter, not directly to app

SQLAlchemy Async Repository Method

Write an async SQLAlchemy 2.x repository method. Context:

Model: [PASTE model class]
Session type: AsyncSession
Use select() statements — NOT session.query()

Method needed: [describe what it should query]
Return type: [single model / list / None]
Filters: [describe filter conditions]
Ordering: [describe ordering]
Relations to load: [list any relationships to eager-load]
Performance note: [any N+1 concerns to address]

pytest Integration Test

Write pytest integration tests for this FastAPI endpoint.

Conftest setup available:
- client: AsyncClient (with DB session override)
- auth_headers: dict (authenticated user headers)
- db_session: AsyncSession

Endpoint: [METHOD] [PATH]
Handler code: [PASTE the route handler]

Test cases to cover:
1. Successful case — assert status code and response shape
2. Authentication required — test without auth_headers
3. [Add specific error cases: not found, invalid input, permission denied]
4. Edge cases: [empty list, null fields, boundary values]

Use: pytestmark = pytest.mark.asyncio
Use AsyncMock for any mocked dependencies.

Pydantic Model Set

Write Pydantic v2 models for this entity:

Entity: [describe the domain object]
Fields: [list fields with types and constraints]

Generate:
1. Base model (shared fields)
2. CreateModel (for POST requests — exclude auto-generated fields)
3. UpdateModel (for PATCH — all fields optional, with `X | None = None` defaults)
4. ResponseModel (for API responses — add model_config = {"from_attributes": True})
5. Any field validators needed for the constraints listed

Use Python 3.10+ syntax (X | None, not Optional[X]).
Use Pydantic v2 field_validator syntax, not v1 @validator.
After generating, list any design decisions I might want to change.

Django Model + Migration

Write a Django model and initial migration for this entity.

Entity: [describe]
Fields: [list fields with types, constraints, and relations]

Requirements:
- Use settings.AUTH_USER_MODEL for any user relations (not User directly)
- Add a custom Manager with a QuerySet class
- Include __str__ method
- Add appropriate indexes for the query patterns: [describe queries]
- Migration must be safe for a live database — use two-step for NOT NULL additions
- Include both the model and the Django migration file

Debugging an Async Error

I'm getting this error in my async FastAPI/SQLAlchemy code:

Error: [PASTE full traceback]

Relevant code:
[PASTE the function where it fails]

Context:
- SQLAlchemy version: 2.x
- Using AsyncSession with asyncpg
- [any other relevant setup]

What is causing this error and how do I fix it?
If it's a common async pitfall (like accessing a lazy-loaded relationship outside
a session, or mixing sync and async code), explain why it happens.


Related Guides

Prompt Engineering for TypeScript/React

The same recipe-book approach for TypeScript and React: type-first prompting, component patterns, state management, API routes.

AI Prompt Library

53 prompts across all categories, language-agnostic: debugging, code review, testing, refactoring, documentation.

Testing with AI

Deeper coverage of AI-assisted test generation: TDD workflows, edge case generation, and mocking strategies.
