Recently, users on X and Reddit have reported that writing code with Claude 3.5 Sonnet makes them feel "very powerful." One post that sparked particular attention was a tweet from Y Combinator CEO Garry Tan.
Garry shared the following post by a member of the Y Combinator Reddit community about how using Claude 3.5 Sonnet increased their productivity ten-fold while implementing popular features.
The post also highlighted sound architectural and infrastructure decisions as a crucial part of their day-to-day LLM use.
Although LLMs like Claude 3.5 Sonnet offer outstanding benefits, they still have limitations in maintaining memory and context. Several development strategies can address these limitations. They revolve around the software development fundamentals that developers hone with experience but that are easy to overlook when prompting LLMs in plain English.
Applying to LLM interactions the same basic principles developers use daily (what the Reddit author refers to as architectural decisions) can result in highly modular, scalable, and well-documented code.
The following are some key coding principles and development practices that can be applied to LLM-assisted software development, along with practical examples in Python:
All the prompts in the examples were run against Claude 3.5 Sonnet.
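If you prefer to send such prompts programmatically rather than through the chat interface, a minimal sketch using the Anthropic Python SDK might look like the following. The model identifier and max_tokens value are assumptions; check the current API documentation for your account.

# Minimal sketch: sending a prompt to Claude 3.5 Sonnet via the Anthropic Python SDK.
# The model name below is an assumption; use whichever Claude 3.5 Sonnet
# identifier your account exposes.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # assumed model identifier
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
)
print(message.content[0].text)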
To make it easier to describe each logical component to the LLM and get accurate results back, break the codebase into small, well-defined components.
# database.py
class Database:
    def __init__(self, sql_connection_string):
        ...

    def query(self, sql, params=None):
        ...

# user_service.py
class UserService:
    def __init__(self, database):
        self.db = database

    def get_user(self, user_id):
        # use a parameterized query rather than string formatting
        return self.db.query("SELECT * FROM users WHERE id = ?", (user_id,))

# main.py
db = Database("sql_connection_string")
user_service = UserService(db)
user = user_service.get_user(123)
To hide complex implementation details, build abstraction layers and work with the LLM at the higher levels of abstraction, recording the details of each component along the way.
from abc import ABC

# top-level abstraction
class BankingSystem:
    def __init__(self):
        self._account_manager = AccountManager()
        self._transaction_processor = TransactionProcessor()

    def create_account(self, acct_number: str, owner: str) -> None:
        self._account_manager.create_account(acct_number, owner)

    def process_transaction(self, acct_number: str, transaction_type: str, amount: float) -> None:
        account = self._account_manager.get_account(acct_number)
        self._transaction_processor.process(account, transaction_type, amount)

# mid-level abstractions
class AccountManager:
    def __init__(self):
        ...

    def create_account(self, acct_number: str, owner: str) -> None:
        ...

    def get_account(self, acct_number: str) -> 'Account':
        ...

class TransactionProcessor:
    def process(self, account: 'Account', transaction_type: str, amount: float) -> None:
        ...

# lower-level abstractions
class Account(ABC):
    ...

class Transaction(ABC):
    ...

# concrete implementations
class SavingsAccount(Account):
    ...

class CheckingAccount(Account):
    ...

class DepositTransaction(Transaction):
    ...

class WithdrawalTransaction(Transaction):
    ...

# lowest-level abstraction
class TransactionLog:
    ...
# usage focuses on the high-level abstraction
bank = BankingSystem()
bank.create_account("ACC-1001", "Alice")
bank.process_transaction("ACC-1001", "deposit", 250.0)
To make tasks more manageable when seeking help from an LLM, define clear interfaces for each component and implement each one separately.
from abc import ABC, abstractmethod

class PaymentProcessor(ABC):
    @abstractmethod
    def process_payment(self, amount: float, card_no: str) -> bool:
        ...

class StripeProcessor(PaymentProcessor):
    # Stripe-specific implementation
    def process_payment(self, amount: float, card_no: str) -> bool:
        ...

class PayPalProcessor(PaymentProcessor):
    # PayPal-specific implementation
    def process_payment(self, amount: float, card_no: str) -> bool:
        ...
To prevent hallucinations and keep the scope narrow, focus on one small piece at a time and ensure each class or function has a single, well-defined responsibility. Developing incrementally gives you more control over the code being generated.
class UserManager:
    # user creation logic
    def create_user(self, username, email):
        ...

class EmailService:
    # welcome email logic
    def send_welcome_email(self, email):
        ...

class NotificationService:
    # SMS notification logic
    def send_sms(self, username, phone_number):
        ...

# usage
user_manager = UserManager()
email_svc = EmailService()
user = user_manager.create_user("hacker", "hacker@example.com")
email_svc.send_welcome_email("hacker@example.com")
To make it easier to describe code structure to LLMs and to understand their suggestions, use clear and consistent naming conventions.
# classes: PascalCase
class UserAccount:
    pass

# functions and variables: snake_case
def calculate_total_price(item_price, quantity):
    total_cost = item_price * quantity
    return total_cost

# constants: UPPERCASE_WITH_UNDERSCORES
MAX_LOGIN_ATTEMPTS = 3

# private methods/variables: prefix with underscore
class DatabaseConnection:
    def __init__(self):
        self._connection = None

    def _connect_to_database(self):
        pass
To generate specific implementations based on your requirements, create skeleton code for common code structures and use it as starter code.
from typing import List, Optional
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Todo item - pydantic model
class TodoItem(BaseModel):
    id: Optional[int] = None
    title: str
    description: Optional[str] = None
    completed: bool = False

todo_db = [<insert mock data>]

@app.get("/todos", response_model=List[TodoItem])
async def get_todos():
    # TODO: retrieve all todos
    ...

@app.get("/todos/{todo_id}", response_model=TodoItem)
async def get_todo(todo_id: int):
    # TODO: retrieve a specific todo
    ...

@app.post("/todos", response_model=TodoItem)
async def create_todo(todo: TodoItem):
    # TODO: create a todo
    ...

@app.put("/todos/{todo_id}", response_model=TodoItem)
async def update_todo(todo_id: int, todo: TodoItem):
    # TODO: update a todo
    ...

@app.delete("/todos/{todo_id}")
async def delete_todo(todo_id: int):
    # TODO: delete a todo
    ...
To guide the LLM, write concise documentation for each component before implementing it. You can also use the LLM itself to generate docstrings that inform future prompts.
prompt:
Use a Documentation Driven Development approach to write clear, concise docstrings for the following Python function
calculate_discount(price, discount_percentage)
takes in
- price (float), original price of the item.
- discount_percentage (float), discount percentage range(0-100).
returns
- price after discount (float)
raises
- ValueError, when discount_percentage is out of range
example
- calculate_discount(100, 10) -> 90.0
response:
def calculate_discount(price: float, discount_percentage: float) -> float:
    """
    Calculate the discounted price of an item.

    Args:
        price (float): original price of the item.
        discount_percentage (float): discount percentage, range (0-100).

    Returns:
        float: The price after applying the discount.

    Raises:
        ValueError: If discount_percentage is not between 0 and 100.

    Example:
        >>> calculate_discount(100, 20)
        80.0
    """
    if not 0 <= discount_percentage <= 100:
        raise ValueError("Discount percentage must be between 0 and 100")
    discount_amount = price * (discount_percentage / 100)
    return price - discount_amount
To help LLMs with implementation, and especially with edge cases, write tests yourself (or use the LLM to do so) before implementing the actual function. LLMs don't actually run code, so having tests in place beforehand is useful for verifying that generated code meets the requirements and for catching bugs.
test case:
import unittest

def is_palindrome(s: str) -> bool:
    """Check if a string is a palindrome."""
    # naive initial implementation; the tests below expose that it
    # ignores neither spaces nor capitalization
    return s == s[::-1]

class TestPalindrome(unittest.TestCase):
    def test_palindrome(self):
        self.assertTrue(is_palindrome("racecar"))
        self.assertTrue(is_palindrome("A man a plan a canal Panama"))
        self.assertFalse(is_palindrome("hello"))
        self.assertTrue(is_palindrome(""))  # edge case: empty string
prompt:
given the test case <insert test>, use a Documentation Driven Development approach to write a function that passes TestPalindrome
response:
def is_palindrome(s: str) -> bool:
    """
    Check if a string is a palindrome.

    A palindrome is a word, phrase, number, or other sequence of characters
    that reads the same forward and backward, ignoring spaces, punctuation,
    and capitalization.

    Args:
        s (str): The input string to check.

    Returns:
        bool: True if the input is a palindrome, False otherwise.

    Examples:
        >>> is_palindrome("racecar")
        True
        >>> is_palindrome("A man a plan a canal Panama")
        True
        >>> is_palindrome("hello")
        False
        >>> is_palindrome("")
        True
    """
    # Remove non-alphanumeric characters and convert to lowercase
    cleaned = ''.join(char.lower() for char in s if char.isalnum())
    # Compare the cleaned string with its reverse
    return cleaned == cleaned[::-1]
To discuss different implementation options with the LLM, maintain a library of commonly used design patterns for reference.
# Singleton pattern
class DatabaseConnection:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            # initialize database connection
        return cls._instance

# Factory pattern
class AnimalFactory:
    @staticmethod
    def create_animal(animal_type):
        if animal_type == "dog":
            return Dog()
        elif animal_type == "cat":
            return Cat()
        else:
            raise ValueError("Unknown animal type")

# Observer pattern
class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def detach(self, observer):
        self._observers.remove(observer)

    def notify(self):
        for observer in self._observers:
            observer.update()

# Adapter pattern
class LLMAdapter:
    def __init__(self, llm_service):
        self.llm_service = llm_service

    def generate_code(self, prompt):
        llm_response = self.llm_service.complete(prompt)
        return self.extract_code(llm_response)

    def extract_code(self, response):
        pass
To ensure quality and consistency, create a checklist for reviewing LLM-generated code; a sketch of putting it to use follows the list.
# Code Review Checklist
## Functionality
- [ ] Code performs the intended task correctly
- [ ] Edge cases are handled appropriately
## Code Quality
- [ ] Code follows project's style guide
- [ ] Variable and function names are descriptive and consistent
- [ ] No unnecessary comments or dead code
## Performance
- [ ] Code is optimized for efficiency
- [ ] No potential performance bottlenecks
## Security
- [ ] Input validation is implemented
- [ ] Sensitive data is handled securely
## Testing
- [ ] Unit tests are included and pass
- [ ] Edge cases are covered in tests
## Documentation
- [ ] Functions and classes are properly documented
- [ ] Complex logic is explained in comments
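One way to put the checklist to work is to fold it into a review prompt for the LLM itself. The sketch below assumes the checklist is saved to a file; the file name and the build_review_prompt helper are hypothetical, so adapt them to your own setup.

# Rough sketch: building an LLM review prompt from the checklist.
# "code_review_checklist.md" and build_review_prompt are hypothetical names.
from pathlib import Path

def build_review_prompt(generated_code: str, checklist_path: str = "code_review_checklist.md") -> str:
    checklist = Path(checklist_path).read_text()
    return (
        "Review the following code against this checklist and flag any unchecked items:\n\n"
        f"{checklist}\n\n"
        f"Code to review:\n{generated_code}"
    )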
LLMs work best with a defined structure, so develop a strategy for breaking coding tasks into smaller prompts. An organized approach helps produce working code without having to repeatedly ask the LLM to correct what it generated. The prompt below walks through this for a small function, and a sketch of the implementation it should yield follows it.
prompt:
I need to implement a function that generates the Fibonacci sequence using a Documentation Driven Development approach.
1. Purpose: function that generates the Fibonacci sequence up to a given number of terms.
2. Interface:
def fibonacci_seq(n: int) -> List[int]:
    """
    Generate the Fibonacci sequence up to n terms.

    Args:
        n (int): number of terms in the sequence

    Returns:
        List[int]: Fibonacci sequence
    """
3. Key Functionalities:
- handle input validation (n should be a non-negative integer)
- generate the sequence starting with 0 and 1
- each subsequent number is the sum of two preceding ones
- return the sequence as a list
4. Implementation Details:
- use a loop to generate the sequence
- store the sequence in a list
- optimize for memory by only keeping the last two numbers in memory if needed
5. Test Cases:
- fibonacci_seq(0) should return []
- fibonacci_seq(1) should return [0]
- fibonacci_seq(5) should return [0, 1, 1, 2, 3]
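For reference, an implementation satisfying this prompt's interface and test cases might look something like the sketch below. It is one plausible output under the stated requirements, not a verbatim model response.

from typing import List

def fibonacci_seq(n: int) -> List[int]:
    """
    Generate the Fibonacci sequence up to n terms.

    Args:
        n (int): number of terms in the sequence

    Returns:
        List[int]: Fibonacci sequence

    Raises:
        ValueError: if n is negative
    """
    if n < 0:
        raise ValueError("n must be a non-negative integer")
    sequence: List[int] = []
    a, b = 0, 1
    for _ in range(n):
        sequence.append(a)
        a, b = b, a + b  # keep only the last two numbers in memory
    return sequence

# fibonacci_seq(0) -> []
# fibonacci_seq(1) -> [0]
# fibonacci_seq(5) -> [0, 1, 1, 2, 3]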
While the examples above may seem straightforward, foundational practices like modular architecture, effective prompt engineering, and a structured approach to LLM-assisted development make a big difference at scale. By adopting them, developers can get the most out of LLMs, with gains in both productivity and code quality.
LLMs are powerful tools that work best when guided by software engineering principles that are easy to overlook. Internalizing them could be the difference between elegantly crafted code and a randomly generated Big Ball of Mud.
The goal of this article is to encourage developers to always keep these practices in mind while using LLMs to produce high-quality code and save time in the future. As LLMs continue to improve, fundamentals will become even more crucial to getting the best out of them.
For a deeper dive into software development principles, check out this classic textbook: Clean Architecture: A Craftsman's Guide to Software Structure and Design by Robert C. Martin.
If you enjoyed this article, stay tuned for the next one where we dive into a detailed workflow for LLM-assisted development. Please share any other concepts you think are important in the comments! Thank you.