Real API Integration Tests
This directory contains comprehensive integration tests that use real API credentials from your `.env` file to test the actual functionality of OpenAI and AWS Bedrock services.
⚠️ IMPORTANT SAFETY NOTICE
These tests make REAL API calls and incur costs!
The tests are designed with safety in mind:
- 🛡️ Protected by the `real_api` marker: Tests only run when explicitly requested
- 💰 Cost warnings: Interactive prompts before running costly tests
- 📊 Minimal usage: Designed to use minimal tokens to reduce costs
- 🚫 No accidental runs: Regular test runs will skip these tests
Overview
The `test_real_api_integration.py` file contains tests that:
- Use real OpenAI API keys to test chat completions (streaming and non-streaming)
- Use real AWS Bedrock credentials to test Claude and Titan models
- Compare responses between different providers
- Test configuration validation and error handling
- Test performance characteristics and token usage tracking
Prerequisites
- Environment Configuration: Ensure your `.env` file is properly configured with API credentials:

  ```bash
  # OpenAI Configuration
  OPENAI_API_KEY="sk-proj-..."

  # AWS Bedrock Configuration
  AWS_ACCESS_KEY_ID="AKIA..."
  AWS_SECRET_ACCESS_KEY="..."
  AWS_SESSION_TOKEN="..."  # If using temporary credentials
  AWS_REGION="us-east-1"
  ```
- Dependencies: Make sure all required packages are installed:

  ```bash
  pip install pytest pytest-asyncio pytest-cov
  ```
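As a quick sanity check before running anything, a script along these lines can report which providers are configured without making any API calls. The helper name `check_credentials` is hypothetical (not part of this repo), and it assumes the variables have been exported to the environment, e.g. by your shell or a dotenv loader:

```python
import os

def check_credentials():
    """Report which providers have credentials set (makes no API calls)."""
    return {
        "openai": bool(os.getenv("OPENAI_API_KEY")),
        "bedrock": bool(
            os.getenv("AWS_ACCESS_KEY_ID") and os.getenv("AWS_SECRET_ACCESS_KEY")
        ),
    }

if __name__ == "__main__":
    print(check_credentials())
```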
Running the Tests
🚨 Safety First: Understanding the Markers
- `real_api` marker: Tests that make actual API calls and cost money
- No marker: Safe configuration tests that don’t make API calls
```bash
# Safe: Run only configuration tests (NO API CALLS)
pytest tests/test_real_api_integration.py -k "not real_api"

# COSTS MONEY: Run real API tests (requires explicit marker)
pytest tests/test_real_api_integration.py -m real_api
```
Using the Test Runner Script (Recommended)
The test runner includes safety prompts and cost warnings:
```bash
# Run quick smoke tests (includes cost warning)
python run_real_api_tests.py

# Skip the cost confirmation prompt
python run_real_api_tests.py --yes

# Run all real API tests
python run_real_api_tests.py --mode all

# Run only OpenAI tests
python run_real_api_tests.py --mode openai

# Run only Bedrock tests
python run_real_api_tests.py --mode bedrock

# Run ONLY configuration tests (NO API CALLS, NO COSTS)
python run_real_api_tests.py --mode config

# Run with verbose output
python run_real_api_tests.py --verbose

# Stop on first failure
python run_real_api_tests.py --failfast
```
Using pytest directly
⚠️ These commands make real API calls and cost money!
```bash
# Run all real API tests (COSTS MONEY)
pytest tests/test_real_api_integration.py -m real_api -v

# Run only OpenAI tests (COSTS MONEY)
pytest tests/test_real_api_integration.py -m real_api -k "TestRealOpenAI" -v

# Run only Bedrock tests (COSTS MONEY)
pytest tests/test_real_api_integration.py -m real_api -k "TestRealBedrock" -v

# Run SAFE configuration tests only (NO API CALLS)
pytest tests/test_real_api_integration.py -k "not real_api" -v

# Run with logging (COSTS MONEY)
pytest tests/test_real_api_integration.py -m real_api --log-cli-level=INFO -s
```
Test Categories
💰 TestRealOpenAIIntegration (Costs Money)
- test_openai_chat_completion_basic: Basic chat completion functionality
- test_openai_streaming_chat_completion: Streaming response handling
- test_openai_multiple_models: Testing different OpenAI models
💰 TestRealBedrockIntegration (Costs Money)
- test_bedrock_claude_chat_completion: Claude chat completion
- test_bedrock_claude_streaming: Claude streaming responses
- test_bedrock_titan_chat_completion: Titan text generation
- test_bedrock_multiple_models: Testing different Bedrock models
💰 TestRealAPIComparison (Costs Money)
- test_compare_openai_vs_bedrock: Side-by-side comparison of providers
🆓 TestConfigurationValidation (Free)
- test_env_variables_loaded: Verify environment configuration (no API calls)
- test_factory_model_resolution: Test model resolution logic (no API calls)
- test_error_handling_invalid_model: Error handling validation (costs money)
💰 TestPerformanceAndLimits (Costs Money)
- test_concurrent_requests: Concurrent API request handling
- test_token_usage_tracking: Token usage and billing tracking
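The concurrency test presumably fans out several requests at once and collects the responses. A minimal sketch of that pattern, using a stand-in coroutine (`fake_chat`, not part of the repo) instead of a real, costly provider call:

```python
import asyncio

async def fake_chat(prompt: str) -> str:
    """Stand-in for a real provider call; sleeps instead of spending tokens."""
    await asyncio.sleep(0.01)
    return f"echo: {prompt}"

async def run_concurrent(prompts):
    """Issue all requests at once and gather the responses in input order."""
    return await asyncio.gather(*(fake_chat(p) for p in prompts))

responses = asyncio.run(run_concurrent(["a", "b", "c"]))
```

A real test would swap `fake_chat` for an actual client call; `asyncio.gather` preserves the order of the inputs, which keeps assertions simple.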
Test Markers
The tests use pytest markers to ensure safety:
- `@pytest.mark.real_api`: REQUIRED for tests that make real API calls
- `@pytest.mark.skipif(not OPENAI_AVAILABLE)`: Skip if OpenAI is not configured
- `@pytest.mark.skipif(not AWS_AVAILABLE)`: Skip if AWS is not configured
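Custom markers like `real_api` are normally registered in the pytest configuration so they are recognized without a `PytestUnknownMarkWarning`. A sketch, assuming a `pytest.ini` at the project root (the actual config file in this repo may differ):

```ini
[pytest]
markers =
    real_api: tests that make real API calls and incur costs
```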
Expected Behavior
Successful Test Run
When tests pass, you should see:
- ✅ API credentials validated
- 💰 Cost warnings (for real API tests)
- 📋 Test execution with detailed logging
- 🔍 Response content and usage metrics
- ✅ All assertions passing
Skipped Tests
Tests will be automatically skipped if:
- Required API credentials are not configured
- You don’t use the `real_api` marker (for safety)
- Specific models are not available in your region
- Rate limits are encountered (for some tests)
Cost Considerations
💰 API Usage Costs:
- OpenAI: Typically a few cents per test run
- AWS Bedrock: Varies by model and region
- Quick mode: ~$0.01-0.02 per run
- Full test suite: ~$0.05-0.10 per run
Cost Minimization Features:
- Use the quick test mode for regular validation
- Interactive cost confirmations before running expensive tests
- Minimal token usage in all test prompts
- Configuration-only tests that make no API calls
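To see where estimates like the ones above come from: provider pricing is quoted per 1K (or 1M) tokens, so a run's cost is just a weighted token count. The rates below are illustrative placeholders, not current pricing:

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  in_rate_per_1k: float, out_rate_per_1k: float) -> float:
    """Estimated cost in USD, given per-1K-token input and output rates."""
    return (prompt_tokens / 1000) * in_rate_per_1k + (completion_tokens / 1000) * out_rate_per_1k

# e.g. a tiny test prompt: 50 tokens in, 100 out, at placeholder rates
cost = estimate_cost(50, 100, in_rate_per_1k=0.0005, out_rate_per_1k=0.0015)
```

This is also why the test prompts are kept minimal: output tokens usually cost several times more than input tokens, so capping `max_tokens` dominates the bill.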
Safety Features
1. Marker Protection
```bash
# This will NOT run real API tests (safe)
pytest tests/test_real_api_integration.py

# This WILL run real API tests (costs money)
pytest tests/test_real_api_integration.py -m real_api
```
2. Interactive Cost Warnings
The test runner will prompt before running costly tests:
```
⚠️  COST WARNING:
These tests make REAL API calls that will incur costs!

Estimated cost per run:
  • Quick mode: ~$0.01-0.02
  • Full test suite: ~$0.05-0.10

Continue? [y/N]:
```
3. Configuration-Only Mode
```bash
# Run ONLY configuration tests (zero API calls)
python run_real_api_tests.py --mode config
```
Troubleshooting
Common Issues
- Tests Not Running
  - Symptom: `No tests ran matching the given pattern`
  - Solution: Use `-m real_api` to run the real API tests
  - Safe alternative: Use `--mode config` for configuration-only tests
- Authentication Errors
  - Symptom: `ConfigurationError: API key not configured`
  - Verify your `.env` file contains valid credentials
  - Check that keys are not expired or revoked
- Model Not Available
  - Symptom: `ModelNotFoundError: Model not supported in region`
  - Some Bedrock models are region-specific
  - Update the test model IDs for your region
- Rate Limiting
  - Symptom: `RateLimitError: Too many requests`
  - Add delays between tests if needed
  - Use smaller batch sizes for concurrent tests
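When rate limits do bite, the standard remedy is exponential backoff with jitter around each call. A generic sketch (`with_backoff` is a hypothetical helper, not the repo's actual code):

```python
import random
import time

def with_backoff(call, max_attempts=4, base_delay=0.5):
    """Retry `call` on failure, doubling the delay each attempt, with jitter."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of retries; surface the last error
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

In a test, you would wrap the provider call, e.g. `with_backoff(lambda: client.chat(...))`, ideally narrowing the `except` to the client's rate-limit exception.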
Debugging
To debug test failures:
- Test configuration safely:

  ```bash
  python run_real_api_tests.py --mode config
  ```

- Enable verbose logging:

  ```bash
  python run_real_api_tests.py --verbose --yes
  ```

- Run individual test methods:

  ```bash
  pytest tests/test_real_api_integration.py::TestRealOpenAIIntegration::test_openai_chat_completion_basic -m real_api -v -s
  ```
Integration with CI/CD
Recommended CI/CD Setup
For automated testing, consider these safety measures:
- Manual Triggers Only: Never run real API tests on every commit
- Separate API Keys: Use dedicated testing API keys with spending limits
- Cost Monitoring: Set up billing alerts
- Conditional Execution: Only run on specific branches
Example GitHub Actions configuration:
```yaml
- name: Run Configuration Tests (Safe)
  # This runs on every push - no API calls
  run: python run_real_api_tests.py --mode config

- name: Run Real API Tests (Costs Money)
  if: github.event_name == 'workflow_dispatch'  # Manual trigger only
  env:
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  run: python run_real_api_tests.py --mode quick --yes
```
Contributing
When adding new tests:
- Always use the `real_api` marker for tests that make API calls
- Test configuration separately, without the marker, for free validation
- Include proper assertions for response validation
- Add logging for debugging and monitoring
- Consider cost implications of new API calls
- Test both success and failure scenarios
- Use minimal tokens to keep costs low
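A new test following these guidelines might look roughly like this. The `OPENAI_AVAILABLE` flag and the test's shape are assumptions about the repo's conventions, not its actual code, and the client call itself is elided:

```python
import os

import pytest

OPENAI_AVAILABLE = bool(os.getenv("OPENAI_API_KEY"))

@pytest.mark.real_api
@pytest.mark.skipif(not OPENAI_AVAILABLE, reason="OpenAI not configured")
def test_openai_one_word_reply():
    """Minimal-token prompt; assert on response shape, not exact wording."""
    # client setup and call elided; keep max_tokens tiny to cap cost
    ...
```

The `skipif` guard keeps the test green on machines without credentials, and the `real_api` mark keeps it out of regular test runs.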
Quick Start Examples
Safe Configuration Check
```bash
# Test your setup without any API calls (FREE)
python run_real_api_tests.py --mode config
```
Quick Validation
```bash
# Test that your APIs work with minimal cost (~$0.01)
python run_real_api_tests.py --mode quick
```
Direct Script Execution
```bash
# Run basic validation directly (costs money)
PYTHONPATH=. python tests/test_real_api_integration.py
```
Remember: Always check your API usage dashboards after running real API tests to monitor costs!