This example demonstrates how to create and use an Upsonic Agent powered by Groq’s ultra-fast inference to perform comprehensive code reviews. The example showcases structured output with Pydantic schemas, web search integration for best practices, and Groq’s speed advantages for developer productivity.

Overview

The Upsonic framework provides seamless integration with Groq models. This example showcases:
  1. Groq Integration — Using Groq’s fast LLaMA models for code analysis
  2. Structured Output — Pydantic schemas for typed, validated responses
  3. Web Search — DuckDuckGo integration for current best practices
  4. Security Analysis — Detection of common vulnerabilities (OWASP categories)
  5. Performance Review — Algorithmic complexity and optimization suggestions
  6. FastAPI Server — Running the agent as a production-ready API server
The agent analyzes code across multiple dimensions:
  • Security — SQL injection, XSS, insecure patterns
  • Performance — Complexity issues, memory concerns, optimizations
  • Quality — Readability, maintainability, documentation
  • Best Practices — Industry standards, design patterns
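The SQL-injection case in particular can be seen end to end in a minimal, self-contained sketch. This is not code from the example project; it uses an in-memory sqlite3 database purely to illustrate the kind of issue the agent flags and the fix it suggests:

```python
# Sketch: string-built SQL vs. a parameterized query (the agent's suggested fix).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

def get_user_unsafe(user_id: str):
    # Flagged as CRITICAL: user input is concatenated into the SQL string,
    # so input like "1 OR 1=1" changes the meaning of the query.
    return conn.execute(
        "SELECT * FROM users WHERE id = " + user_id
    ).fetchall()

def get_user_safe(user_id: str):
    # Suggested fix: a parameterized query; the driver binds the value safely.
    return conn.execute(
        "SELECT * FROM users WHERE id = ?", (user_id,)
    ).fetchall()

print(get_user_unsafe("1 OR 1=1"))  # returns every row: the injection succeeded
print(get_user_safe("1"))           # returns only the matching row
```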

Project Structure

groq_code_review_agent/
├── main.py                    # Entry point with main() and amain()
├── agent.py                   # Agent creation with Groq configuration
├── schemas.py                 # Pydantic output schemas
├── task_builder.py            # Task description builder
├── upsonic_configs.json       # Upsonic CLI configuration
└── README.md                  # Quick start guide

Environment Variables

Configure the Groq model using environment variables:
# Required: Set Groq API key
export GROQ_API_KEY="your-groq-api-key"
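If the key is missing, the underlying Groq client fails at call time; a small guard can surface the problem earlier with a clearer message. The `require_env` helper below is our own illustration, not part of the example project:

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, or fail with a clear error."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set; export it before running the agent "
            f'(e.g. export {name}="your-key")'
        )
    return value

# Placeholder value so the sketch runs standalone; in real use the key
# comes from your shell environment or a .env file.
os.environ.setdefault("GROQ_API_KEY", "demo-key")
print(require_env("GROQ_API_KEY"))
```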

Installation

# Install dependencies from upsonic_configs.json
upsonic install

Managing Dependencies

# Add a package
upsonic add <package> <section>
upsonic add pandas api

# Remove a package
upsonic remove <package> <section>
upsonic remove streamlit api
Sections: api, streamlit, development

Usage

Option 1: Run Directly

python3 main.py
Runs the agent with default test inputs (Python code with security issues).

Option 2: Run as API Server

upsonic run
Server starts at http://localhost:8000. API documentation at /docs. Example API call:
curl -X POST http://localhost:8000/call \
  -H "Content-Type: application/json" \
  -d '{
    "code": "def get_user(id):\n    return db.query(\"SELECT * FROM users WHERE id=\" + id)",
    "language": "python",
    "focus_areas": ["security"]
  }'
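The same call can be made from Python with only the standard library. This is a sketch assuming the server started by `upsonic run` is listening on localhost:8000; the `call_review_api` helper name is ours:

```python
import json
import urllib.request

# The same payload as the curl example above.
payload = {
    "code": 'def get_user(id):\n    return db.query("SELECT * FROM users WHERE id=" + id)',
    "language": "python",
    "focus_areas": ["security"],
}

def call_review_api(payload: dict, url: str = "http://localhost:8000/call") -> dict:
    """POST the review request as JSON and return the decoded response."""
    request = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# result = call_review_api(payload)  # requires the server to be running
```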

How It Works

  • Groq Model — LLaMA 3.3 70B for comprehensive analysis or 8B for speed
  • Structured Output — Pydantic schemas ensure consistent, typed responses
  • Web Search — DuckDuckGo for current best practices and security advisories
  • Security Analysis — OWASP-aligned vulnerability detection
  • Performance Review — Complexity analysis and optimization suggestions

Example Output

When you run the agent, you’ll see a formatted rendering of the structured output:
================================================================================
🔍 CODE REVIEW COMPLETED SUCCESSFULLY
================================================================================

📋 Language: python
🎯 Focus Areas: security, performance, best_practices
⭐ Overall Rating: NEEDS_IMPROVEMENT

--------------------------------------------------------------------------------
📝 SUMMARY
--------------------------------------------------------------------------------
The code has several issues, including a critical SQL injection vulnerability,
a high-severity input validation issue, and a medium-severity performance issue.

--------------------------------------------------------------------------------
🚨 ISSUES FOUND (3)
--------------------------------------------------------------------------------

1. 🔴 [CRITICAL] SQL Injection Vulnerability
   Category: security
   Location: query = "SELECT * FROM users WHERE name = '" + user_input + "'"
   Suggestion: Use parameterized queries
   Example: query = "SELECT * FROM users WHERE name = ?"

2. 🟠 [HIGH] Input Validation
   Category: security
   Suggestion: Validate user input

3. 🟡 [MEDIUM] Inefficient User Lookup
   Category: performance
   Suggestion: Use a dictionary for user lookup

--------------------------------------------------------------------------------
🔒 SECURITY ANALYSIS
--------------------------------------------------------------------------------
   Risk Level: HIGH
   Vulnerabilities Found: 1
   OWASP Categories: A03:2021-Injection
   Recommendations:
     • Use parameterized queries
     • Validate user input

--------------------------------------------------------------------------------
⚡ PERFORMANCE ANALYSIS
--------------------------------------------------------------------------------
   Complexity Issues:
     • Inefficient user lookup
   Optimization Opportunities:
     • Use a dictionary for user lookup

--------------------------------------------------------------------------------
📊 CODE QUALITY METRICS
--------------------------------------------------------------------------------
   Readability: good
   Maintainability: fair
   Documentation: fair
   Test Coverage: Write unit tests for each function

--------------------------------------------------------------------------------
✅ POSITIVE ASPECTS
--------------------------------------------------------------------------------
   • Good naming conventions
   • Clear code structure

--------------------------------------------------------------------------------
🎯 PRIORITY FIXES
--------------------------------------------------------------------------------
   1. SQL injection vulnerability
   2. Input validation
   3. Inefficient user lookup

================================================================================
The CodeReviewOutput Pydantic model provides fully typed access:
report: CodeReviewOutput = result["review_report"]
print(report.overall_rating)  # "needs_improvement"
print(report.security_analysis.risk_level)  # "high"
for issue in report.issues:
    print(f"{issue.severity}: {issue.title}")

Complete Implementation

main.py

from __future__ import annotations

from typing import Dict, Any

from upsonic import Task

try:
    from .agent import create_code_review_agent
    from .task_builder import build_review_task
    from .schemas import CodeReviewOutput
except ImportError:
    from agent import create_code_review_agent
    from task_builder import build_review_task
    from schemas import CodeReviewOutput


def main(inputs: Dict[str, Any]) -> Dict[str, Any]:
    """Main function for code review and best practices analysis."""
    code = inputs.get("code")
    if not code:
        raise ValueError("code is required in inputs")
    
    language = inputs.get("language")
    if not language:
        raise ValueError("language is required in inputs")
    
    focus_areas = inputs.get("focus_areas", [])
    context = inputs.get("context")
    model = inputs.get("model", "groq/llama-3.3-70b-versatile")
    
    agent = create_code_review_agent(model=model)
    
    task_description = build_review_task(
        code=code,
        language=language,
        focus_areas=focus_areas,
        context=context,
    )
    
    # Pass response_format to Task for structured output
    task = Task(task_description, response_format=CodeReviewOutput)
    
    result = agent.do(task)
    
    return {
        "language": language,
        "focus_areas": focus_areas,
        "review_report": result,  # This is a CodeReviewOutput instance
        "review_completed": True,
    }
The result contains a typed CodeReviewOutput object that you can access programmatically:
result = main(inputs)
report: CodeReviewOutput = result["review_report"]

# Access structured fields
print(report.summary)
print(report.overall_rating)

# Iterate over issues
for issue in report.issues:
    if issue.severity == "critical":
        print(f"🔴 {issue.title}: {issue.suggestion}")

# Access nested analysis
print(report.security_analysis.owasp_categories)
print(report.performance_analysis.optimization_opportunities)
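The issues list is not guaranteed to arrive ordered by urgency, so a common follow-up is ranking it by the schema's severity levels. The sketch below uses a small dataclass as a stand-in for `CodeIssue` so it runs standalone:

```python
from dataclasses import dataclass

@dataclass
class Issue:
    """Stand-in for the CodeIssue model in schemas.py, for illustration only."""
    severity: str
    title: str

# Lower rank = more urgent; mirrors the Literal values in the schema.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3, "info": 4}

def sort_by_severity(issues):
    """Return the issues ordered from most to least severe."""
    return sorted(issues, key=lambda issue: SEVERITY_RANK[issue.severity])

issues = [
    Issue("medium", "Inefficient user lookup"),
    Issue("critical", "SQL injection vulnerability"),
    Issue("high", "Missing input validation"),
]
for issue in sort_by_severity(issues):
    print(issue.severity, issue.title)
```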

agent.py

"""
Code Review Agent creation and configuration.

Creates the main Agent that performs comprehensive code reviews
using Groq's fast inference capabilities with web search for best practices.
"""

from __future__ import annotations

from typing import Optional, List

from upsonic import Agent
from upsonic.tools.common_tools.duckduckgo import duckduckgo_search_tool


def create_code_review_agent(
    model: str = "groq/llama-3.3-70b-versatile",
    tools: Optional[List] = None,
) -> Agent:
    """Create the code review agent with Groq model.
    
    Args:
        model: Groq model identifier for the agent
        tools: Optional list of additional tools
        
    Returns:
        Configured Agent instance for code review
    """
    ddg_search = duckduckgo_search_tool(duckduckgo_client=None, max_results=5)
    
    agent_tools = [ddg_search]
    if tools:
        agent_tools.extend(tools)
    
    agent = Agent(
        model=model,
        name="code-review-agent",
        role="Senior Software Engineer & Code Reviewer",
        goal="Provide comprehensive code reviews with actionable feedback on security, performance, best practices, and code quality",
        system_prompt="""You are an expert senior software engineer with 15+ years of experience 
        in code review and software architecture. Your expertise spans multiple programming languages 
        and you have deep knowledge of:
        
        - Security vulnerabilities and secure coding practices
        - Performance optimization and algorithmic efficiency
        - Design patterns and software architecture
        - Clean code principles and maintainability
        - Testing strategies and code coverage
        - Industry best practices and coding standards
        
        When reviewing code:
        1. Identify potential bugs and logic errors
        2. Detect security vulnerabilities (SQL injection, XSS, buffer overflows, etc.)
        3. Suggest performance improvements
        4. Recommend better design patterns or abstractions
        5. Point out code style and readability issues
        6. Suggest appropriate test cases
        
        Use web search to find current best practices and industry standards when needed.
        
        Always provide:
        - Clear explanation of issues found
        - Severity level (Critical, High, Medium, Low)
        - Specific code suggestions for fixes
        - References to relevant documentation or best practices
        
        Be constructive and educational in your feedback. Help developers understand 
        not just what to fix, but why.""",
        tools=agent_tools,
        tool_call_limit=10,
    )
    
    return agent

schemas.py

"""
Output schemas for code review agent.

Defines structured Pydantic models for type-safe outputs from the
code review analysis.
"""

from __future__ import annotations

from typing import List, Optional, Literal
from pydantic import BaseModel, Field


class CodeIssue(BaseModel):
    """Represents a single code issue found during review."""
    
    severity: Literal["critical", "high", "medium", "low", "info"] = Field(
        description="Severity level of the issue"
    )
    category: str = Field(
        description="Category of the issue (e.g., security, performance, style, bug)"
    )
    line_reference: Optional[str] = Field(
        default=None,
        description="Line number or code section reference"
    )
    title: str = Field(
        description="Brief title describing the issue"
    )
    description: str = Field(
        description="Detailed description of the issue"
    )
    suggestion: str = Field(
        description="Suggested fix or improvement"
    )
    code_example: Optional[str] = Field(
        default=None,
        description="Example of correct/improved code"
    )


class SecurityAnalysis(BaseModel):
    """Security-focused analysis results."""
    
    vulnerabilities_found: int = Field(
        description="Number of security vulnerabilities found"
    )
    risk_level: Literal["critical", "high", "medium", "low", "none"] = Field(
        description="Overall security risk level"
    )
    owasp_categories: List[str] = Field(
        default_factory=list,
        description="Relevant OWASP categories for issues found"
    )
    recommendations: List[str] = Field(
        default_factory=list,
        description="Security recommendations"
    )


class PerformanceAnalysis(BaseModel):
    """Performance-focused analysis results."""
    
    complexity_issues: List[str] = Field(
        default_factory=list,
        description="Algorithmic complexity concerns"
    )
    memory_concerns: List[str] = Field(
        default_factory=list,
        description="Memory usage concerns"
    )
    optimization_opportunities: List[str] = Field(
        default_factory=list,
        description="Potential performance optimizations"
    )


class CodeQualityMetrics(BaseModel):
    """Code quality assessment metrics."""
    
    readability_score: Literal["excellent", "good", "fair", "poor"] = Field(
        description="Code readability assessment"
    )
    maintainability_score: Literal["excellent", "good", "fair", "poor"] = Field(
        description="Code maintainability assessment"
    )
    test_coverage_suggestion: str = Field(
        description="Suggestions for test coverage"
    )
    documentation_quality: Literal["excellent", "good", "fair", "poor", "missing"] = Field(
        description="Documentation quality assessment"
    )


class CodeReviewOutput(BaseModel):
    """Complete code review output."""
    
    summary: str = Field(
        description="Executive summary of the code review"
    )
    overall_rating: Literal["excellent", "good", "needs_improvement", "poor", "critical"] = Field(
        description="Overall code quality rating"
    )
    issues: List[CodeIssue] = Field(
        default_factory=list,
        description="List of all issues found"
    )
    security_analysis: SecurityAnalysis = Field(
        description="Security analysis results"
    )
    performance_analysis: PerformanceAnalysis = Field(
        description="Performance analysis results"
    )
    code_quality: CodeQualityMetrics = Field(
        description="Code quality metrics"
    )
    positive_aspects: List[str] = Field(
        default_factory=list,
        description="Positive aspects of the code"
    )
    priority_fixes: List[str] = Field(
        default_factory=list,
        description="Top priority items to fix, in order"
    )
    learning_resources: List[str] = Field(
        default_factory=list,
        description="Recommended resources for improvement"
    )
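Because these are ordinary Pydantic models, any dict-shaped payload can be validated into a typed object explicitly, and malformed values are rejected up front. A sketch using the `SecurityAnalysis` model (repeated here, slightly trimmed, so the snippet runs standalone; assumes Pydantic v2):

```python
from typing import List, Literal
from pydantic import BaseModel, Field, ValidationError

class SecurityAnalysis(BaseModel):
    """Copy of the schema above, repeated so this snippet is self-contained."""
    vulnerabilities_found: int = Field(description="Number of vulnerabilities found")
    risk_level: Literal["critical", "high", "medium", "low", "none"] = Field(
        description="Overall security risk level"
    )
    owasp_categories: List[str] = Field(default_factory=list)
    recommendations: List[str] = Field(default_factory=list)

# A well-formed payload validates into a typed object...
analysis = SecurityAnalysis.model_validate(
    {
        "vulnerabilities_found": 1,
        "risk_level": "high",
        "owasp_categories": ["A03:2021-Injection"],
    }
)
print(analysis.risk_level)  # high

# ...while a value outside the Literal is rejected with a ValidationError.
try:
    SecurityAnalysis.model_validate({"vulnerabilities_found": 0, "risk_level": "severe"})
except ValidationError as error:
    print("rejected:", error.error_count(), "error(s)")
```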

upsonic_configs.json

{
    "envinroment_variables": {
        "UPSONIC_WORKERS_AMOUNT": {
            "type": "number",
            "description": "The number of workers for the Upsonic API",
            "default": 1
        },
        "API_WORKERS": {
            "type": "number",
            "description": "The number of workers for the Upsonic API",
            "default": 1
        },
        "RUNNER_CONCURRENCY": {
            "type": "number",
            "description": "The number of runners for the Upsonic API",
            "default": 1
        },
        "GROQ_API_KEY": {
            "type": "string",
            "description": "Groq API key for authentication",
            "required": true
        }
    },
    "machine_spec": {
        "cpu": 1,
        "memory": 2048,
        "storage": 512
    },
    "agent_name": "Groq Code Review & Best Practices Agent",
    "description": "Fast and comprehensive code review agent powered by Groq's ultra-fast inference. Analyzes code for security vulnerabilities, performance issues, best practices, and provides actionable improvement suggestions.",
    "icon": "code",
    "language": "python",
    "streamlit": false,
    "proxy_agent": false,
    "dependencies": {
        "api": [
            "upsonic",
            "upsonic[tools]"
        ],
        "development": [
            "python-dotenv",
            "pytest"
        ]
    },
    "entrypoints": {
        "api_file": "main.py",
        "streamlit_file": "streamlit_app.py"
    },
    "input_schema": {
        "inputs": {
            "code": {
                "type": "string",
                "description": "The code snippet to review (required)",
                "required": true,
                "default": null
            },
            "language": {
                "type": "string",
                "description": "Programming language of the code (e.g., python, javascript, java)",
                "required": true,
                "default": null
            },
            "focus_areas": {
                "type": "array",
                "description": "Optional list of areas to focus on (security, performance, best_practices, style)",
                "required": false,
                "default": []
            },
            "context": {
                "type": "string",
                "description": "Optional context about the codebase or project",
                "required": false,
                "default": null
            },
            "model": {
                "type": "string",
                "description": "Groq model identifier (e.g., groq/llama-3.3-70b-versatile, groq/llama-3.1-8b-instant)",
                "required": false,
                "default": "groq/llama-3.3-70b-versatile"
            }
        }
    },
    "output_schema": {
        "language": {
            "type": "string",
            "description": "The programming language of the reviewed code"
        },
        "focus_areas": {
            "type": "array",
            "description": "The focus areas that were analyzed"
        },
        "review_report": {
            "type": "object",
            "description": "Comprehensive code review report with issues, suggestions, and metrics"
        },
        "review_completed": {
            "type": "boolean",
            "description": "Whether the review was successfully completed"
        }
    }
}

Key Features

Groq’s Ultra-Fast Inference

Groq’s custom LPU (Language Processing Unit) hardware provides:
  • Up to 10x faster inference than GPU-based solutions
  • Low latency variance for production workloads
  • Competitive pricing with high throughput
  • Access to top-tier open-source models

Structured Output with Pydantic

For structured output, pass response_format through the Task:
from upsonic import Task
from schemas import CodeReviewOutput

task = Task(task_description, response_format=CodeReviewOutput)
result = agent.do(task)  # Returns a CodeReviewOutput instance
This ensures:
  • Type-safe, validated responses
  • Consistent output structure
  • Easy integration with downstream systems
  • Automatic schema generation for API documentation
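The last point can be seen with Pydantic's built-in JSON Schema export, which is what API frameworks such as FastAPI surface at /docs. Shown here on a trimmed stand-in model rather than the full `CodeReviewOutput` (assumes Pydantic v2):

```python
from typing import List, Literal
from pydantic import BaseModel, Field

class ReviewSummary(BaseModel):
    """Trimmed stand-in for CodeReviewOutput, kept short for illustration."""
    summary: str = Field(description="Executive summary of the code review")
    overall_rating: Literal["excellent", "good", "needs_improvement", "poor", "critical"]
    priority_fixes: List[str] = Field(default_factory=list)

# Pydantic emits a JSON Schema dict describing the model's fields,
# required-ness, and the allowed Literal values.
schema = ReviewSummary.model_json_schema()
print(schema["required"])  # ['summary', 'overall_rating']
print(schema["properties"]["overall_rating"]["enum"])
```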

Web Search Integration

The agent uses DuckDuckGo to:
  • Find current security best practices
  • Look up language-specific coding standards
  • Search for recent CVEs and security advisories
  • Discover performance benchmarks and optimization techniques

Specialized Agent Variants

The example includes factory functions for specialized agents:
  • create_security_focused_agent() — Security-only analysis
  • create_performance_focused_agent() — Performance-only analysis
These use the faster llama-3.1-8b-instant model for quicker, focused reviews.

Repository

View the complete example: Groq Code Review Agent