Implementation (Part 2)

In Part 1 of this series, I shared how and why we embraced Test-Driven Development (TDD) at Bynry Inc. to improve the quality and reliability of our Smart360 utility management platform. Now, let's dive into the technical details of our implementation—how we designed our testing framework, the architectural choices we made, and the real-world results we've achieved.

Setting the Technical Foundation

When Gaurav and I set out to build our testing framework, we had several key requirements:

  1. Support for API testing: Since Smart360 is built on microservices, we needed robust API testing capabilities
  2. Easy to write tests: Developers should find the framework intuitive and straightforward
  3. Comprehensive coverage: The framework should help identify untested code paths
  4. Clear feedback: When tests fail, developers should quickly understand why
  5. Maintainable test code: Test code should be as clean and maintainable as production code
  6. CI/CD integration: Tests should run automatically with each code change

After evaluating various options, we chose pytest as our foundation due to its flexibility, extensive plugin ecosystem, and excellent support for fixtures and parametrization.
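
To make that concrete, here's a toy example (not from our codebase) of the two pytest features we lean on most, fixtures and parametrization:

import pytest

# Illustrative only: a trivial mapping standing in for real application logic
STATUS_DISPLAY = {0: "Unassigned", 1: "Assigned", 2: "Expired"}

@pytest.fixture
def status_display_map():
    """Fixtures provide reusable test context, injected by argument name."""
    return STATUS_DISPLAY

@pytest.mark.parametrize("status,expected", [
    (0, "Unassigned"),
    (1, "Assigned"),
    (2, "Expired"),
])
def test_status_display(status_display_map, status, expected):
    # Parametrization runs this one test body once per (status, expected) pair
    assert status_display_map[status] == expected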

The Architecture of Our Testing Framework

Our testing framework for Smart360 is organized with a clear separation of concerns:

📦 tests
├─ data/                     # Test data organized by module
│  ├─ meter_data.py          # Meter test data
│  ├─ route_data.py          # Route test data
│  └─ ...
├─ views/                    # Tests for API views
│  ├─ test_meter_view.py
│  ├─ test_bulk_meter_upload_view.py
│  └─ ...
├─ models/                   # Tests for models
├─ utils/                    # Utilities for testing
│  ├─ dummy_wrapper_utils/   # Mock external services
│  │  ├─ auth_wrapper.py
│  │  ├─ onboarding_wrapper.py
│  │  └─ ...
│  ├─ base_test_api_view.py  # Base test class
│  └─ ...
└─ conftest.py               # Global pytest fixtures

This structure reflects our philosophy that test code should be as organized and maintainable as production code. Let's explore each component in detail.
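
For example, the global fixtures in conftest.py might look something like this (a minimal sketch assuming pytest-django; the fixture names are illustrative, not our exact code):

# Hypothetical example of global fixtures in conftest.py
import pytest

@pytest.fixture(autouse=True)
def enable_db_access(db):
    """Give every test access to the test database via pytest-django's db fixture."""
    pass

@pytest.fixture
def remote_utility_id():
    """A default utility ID shared across test modules."""
    return 10000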

The Heart of Our Framework: BaseTestApiView

At the core of our testing framework is the BaseTestApiView class. This base class provides a rich set of assertions and utilities specifically designed for API testing:

class BaseTestApiView:
    def setup_method(self):
        # Sets up the API client (self.client) that each test uses to call
        # Smart360 endpoints; defined in our shared test utilities
        self.smart360_api_client()

    def fail(self, msg):
        # Mirrors unittest's fail(): abort the test with a descriptive message
        raise AssertionError(msg)

    def assertEqual(self, first, second, msg=None):
        if first != second:
            standard_msg = f"{first!r} != {second!r}"
            self.fail(msg or standard_msg)

    def assertIn(self, member, container, msg=None):
        if member not in container:
            standard_msg = f"{member!r} not found in {container!r}"
            self.fail(msg or standard_msg)

    # ... more assertions ...

    def assert_response_contains_keys(self, response, expected_keys):
        """Assert that a response contains certain keys"""
        data = self._get_response_data(response)
        # ... implementation ...

    def assert_valid_response(self, response_data, schema):
        """Assert that a response is valid according to the expected schema"""
        # ... implementation ...

Any test class that inherits from BaseTestApiView gets these API-focused assertions for free. When a test fails, they produce clear, detailed error messages, which makes diagnosing the problem much faster.

Data-Driven Testing

One of our key innovations was adopting a data-driven approach to testing. Rather than hardcoding test inputs and expected outputs in test methods, we store them in dedicated data files:

# In tests/data/meter_data.py

meter_test_cases = {
    "get_meter_by_id": {
        "request": {
            "method": "GET",
            "url": "/api/mx/meter/{id}/",
            "data": {"remote_utility_id": 10000}
        },
        "response": {
            "status_code": 200,
            "content_type": "application/json",
            "data_checks": {
                "meter_number": "M12345",
                "status": 0
            }
        }
    },
    # More test cases...
}

This separation of test data from test logic offers several benefits:

  1. Readability: Test methods focus on behavior, not data
  2. Maintainability: Changes to expected behavior only require updating data files
  3. Reusability: The same test data can be used across multiple tests
  4. Documentation: Test data files serve as documentation of expected API behavior
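
To give a feel for how this data gets consumed, here is a simplified sketch (not our exact test code) of a parametrized test driven entirely by the data file above:

import pytest

from tests.data.meter_data import meter_test_cases

@pytest.mark.parametrize(
    "case", list(meter_test_cases.values()), ids=list(meter_test_cases.keys())
)
def test_meter_cases(api_client, case):
    # api_client: assumed DRF test client fixture, not shown here
    request, expected = case["request"], case["response"]
    url = request["url"].format(id=1)  # the real suite resolves IDs from setup data

    # Dispatch GET/POST/... based on the method named in the data file
    response = getattr(api_client, request["method"].lower())(url, request["data"])

    assert response.status_code == expected["status_code"]
    for field, value in expected["data_checks"].items():
        assert response.data[field] == value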

Mocking External Services

Smart360's microservice architecture means services often depend on other services. To test in isolation, we created a system of "dummy wrapper utilities" that mimic the behavior of external services:

# In tests/utils/dummy_wrapper_utils/onboarding_wrapper.py
class DummyOnboardingWrapper:
    def get_premise_info(self, unique_values, remote_utility_id, type_json=None):
        """Return predefined mock response based on the type_json"""
        if type_json == "category_json":
            return [
                {
                    'code': 'Category-1',
                    'hierarchy_names': {'category': 'Mock Category Name'},
                    '_': {'name': 'Mock Category Name'}
                }
            ]
        else:
            return [
                {
                    'code': 'Test Premise',
                    'hierarchy_names': {
                        'area': 'Mock Area Name',
                        'sub_area': 'Mock Sub Area Name',
                        'premise': 'Mock Premise Name'
                    }
                }
            ]

Test classes then use these dummy wrappers to mock external service calls:

# Requires: from unittest.mock import patch, MagicMock
# dummy_onboarding_wrapper is an instance of the DummyOnboardingWrapper shown above

def setup_service_mocks(self):
    """Setup mocks for all external services"""
    self.onboarding_patcher = patch('meter.utils.wrapper_utils.onboarding_wrapper.OnboardingWrapper')
    self.mock_onboarding = self.onboarding_patcher.start()
    self.mock_onboarding_instance = MagicMock()
    self.mock_onboarding.return_value = self.mock_onboarding_instance

    # Configure responses using the dummy wrapper
    self.mock_onboarding_instance.get_premise_info.return_value = \
        dummy_onboarding_wrapper.get_premise_info(["Test Premise"], 10000)

This approach ensures tests are isolated, deterministic, and not dependent on external services being available or behaving consistently.
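
One practical detail worth adding (a sketch, not shown in full above): every patcher started in setup_service_mocks needs a matching stop, which a teardown hook can handle so mocks never leak between tests.

def teardown_method(self):
    """Undo the patches started in setup_service_mocks after every test."""
    self.onboarding_patcher.stop()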

Schema Validation

For API testing, validating the structure and content of responses is critical. We developed a schema validation system that verifies:

  1. Required fields are present
  2. Fields have the correct types
  3. Status codes map to the appropriate display values
  4. Pagination logic is consistent

For example, here is the schema for the meter detail endpoint:

meter_detail_schema = {
    "pagination_fields": {
        "count": {"type": int, "required": True},
        "next": {"type": (str, type(None)), "required": True},
        "previous": {"type": (str, type(None)), "required": True},
    },
    "result_key": "results",
    "result_fields": {
        "id": {"type": int, "required": True},
        "meter_number": {"type": str, "required": True},
        "status": {"type": int, "required": True},
        "status_display": {"type": str, "required": True},
    },
    "status_mappings": {
        "status": {
            0: "Unassigned",
            1: "Assigned",
            2: "Expired"
        }
    }
}

Our assert_valid_response method performs comprehensive validation against these schemas, ensuring APIs maintain consistent response formats and adhere to their documented contracts.
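
To give a flavor of what that looks like under the hood, here is a stripped-down sketch of such a validation method (our real implementation handles more edge cases):

def assert_valid_response(self, response_data, schema):
    """Simplified sketch: validate pagination fields, result fields, and status mappings."""
    # Pagination envelope: required keys exist and have the declared types
    for field, rules in schema.get("pagination_fields", {}).items():
        if rules.get("required"):
            self.assertIn(field, response_data)
        if field in response_data:
            assert isinstance(response_data[field], rules["type"]), f"{field} has wrong type"

    # Each result item: required fields, correct types, status codes map to display values
    for item in response_data[schema["result_key"]]:
        for field, rules in schema["result_fields"].items():
            if rules.get("required"):
                self.assertIn(field, item)
            if field in item:
                assert isinstance(item[field], rules["type"]), f"{field} has wrong type"

        for field, mapping in schema.get("status_mappings", {}).items():
            self.assertEqual(item[f"{field}_display"], mapping[item[field]])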

Standard Test Structure

We established a consistent pattern for all test classes:

class TestMeterDetailAPIView(BaseTestApiView):
    """Test suite for the MeterDetailAPIView"""

    def setup_method(self):
        """Set up test data and client for each test"""
        super().setup_method()

        # Define URL and utility ID
        self.url = '/api/mx/meter-detail/'
        self.utility_id = 10000

        # Create necessary database objects
        self.setup_database_objects()

        # Setup mocks for external services
        self.setup_service_mocks()

    def setup_database_objects(self):
        """Create all database objects from test data"""
        # Create meters from test data
        self.meters = []
        for data in meter_setup_data["existing_meters"]:
            meter = Meter.objects.create(**data)
            self.meters.append(meter)

    def setup_service_mocks(self):
        """Setup mocks for all external services"""
        # Set up mock for onboarding service
        # ... implementation ...

    def test_get_meter_by_id(self):
        """Test retrieving meter by ID"""
        meter = self.meters[0]
        response = self.client.get(f"{self.url}{meter.id}/", {"remote_utility_id": self.utility_id})

        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.data["id"], meter.id)

This consistent structure makes tests easy to read and maintain. Each test follows the Arrange-Act-Assert pattern:

  1. Arrange: Set up test data and conditions
  2. Act: Perform the action being tested
  3. Assert: Verify expected outcomes

Test Running and Reporting

To make our testing framework easy to use, we created a custom test runner script:

#!/bin/bash
# Script to run tests with comprehensive reporting

# Function to display help
show_help() {
    echo "Usage: $0 [OPTIONS]"
    echo ""
    echo "Test Selection:"
    echo "  -f, --failed           Run only previously failed tests"
    echo "  -k PATTERN             Run tests matching pattern"
    # ... more options ...
}

# Process options
# ... implementation ...

# Run the tests
echo "Running tests..."
eval "$COMMAND"

# Display summary information
echo "Test Execution Summary"
echo "========================"
echo "Duration: $duration_str"
echo "Coverage HTML: tests/reports/coverage/index.html"

This script provides a user-friendly interface for running tests, with options for running specific tests, generating coverage reports, and more.

Understanding Test Coverage

A critical aspect of our testing framework is coverage reporting. Test coverage measures how much of your code is executed during testing, helping identify untested code paths.

We configured pytest-cov to generate detailed coverage reports through a standard coverage configuration file:

[run]
source = meter
omit = 
    */migrations/*
    */tests/*
    # ... more exclusions ...

[report]
# Show missing lines in report
show_missing = True

# Fail the command if coverage is below this threshold
fail_under = 80

[html]
directory = tests/reports/coverage
title = Smart360 Test Coverage Report

These reports help us identify which parts of the codebase need more testing, focusing our efforts where they'll have the most impact.
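
With that configuration in place, generating the reports is just a matter of passing the usual pytest-cov flags; for illustration, the equivalent pytest.ini entry might look like this (our runner script assembles these flags for us):

# illustrative pytest.ini
[pytest]
addopts = --cov=meter --cov-report=term-missing --cov-report=html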

Handling Exceptions in Django Views

One subtle but important aspect of our framework is how we test exception handling in our views. When an exception is raised in a view, the framework's exception handling (Django's, Django REST Framework's, or our own handlers) catches it and converts it into an appropriate HTTP error response instead of letting it propagate to the caller. This means we can't use pytest's raises context manager to test exceptions in views.

Instead, we test the response:

# INCORRECT: Django views catch exceptions and return responses
with pytest.raises(CustomException):  # This will fail
    self.client.get('/your-url/')

# CORRECT: Check the response indicates an error
response = self.client.get('/your-url/')
self.assertEqual(response.status_code, 400)
self.assertIn("error message", response.data["message"])

This approach accurately tests how the API behaves from the client's perspective.

Real-World Results

After implementing our TDD framework, we've seen dramatic improvements across the board:

Code Quality Metrics

  • Test coverage: Increased from <30% to >80%
  • Bug reports: Decreased by 70-80%
  • Production incidents: Reduced by 65%
  • Rework: Decreased by 50%

Developer Experience

  • Onboarding time: New developers become productive faster
  • Collaboration: Improved understanding across services
  • Confidence: Developers make changes with less fear of breaking things
  • Critical thinking: More emphasis on edge cases and error handling

Business Impact

  • Release predictability: More features delivered on time and bug-free
  • Customer satisfaction: Fewer service disruptions
  • Developer retention: Less burnout from firefighting
  • Innovation pace: More time for new features, less time fixing bugs

Lessons Learned

Our TDD journey at Bynry wasn't without challenges. Here are some key lessons we learned:

  1. Start with the right mindset: TDD is as much about thinking differently as it is about writing tests
  2. Invest in infrastructure: A good testing framework pays dividends by making tests easier to write
  3. Make tests easy to read: Tests serve as documentation—they should be clear and understandable
  4. Balance testing and pragmatism: 100% coverage isn't always necessary; focus on critical paths
  5. Lead by example: Nothing convinces skeptics like seeing TDD prevent real bugs

Looking Forward

Our testing framework continues to evolve. We're currently exploring:

  1. Property-based testing: Generating test cases automatically to find edge cases (see the sketch after this list)
  2. Performance testing: Identifying performance regressions early
  3. Mutation testing: Verifying the quality of our tests by introducing faults
  4. Contract testing: Formalizing the contracts between microservices
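
To give a taste of the first item, a property-based test with a library like Hypothesis might look something like this (the validator here is a made-up stand-in, not Smart360 code):

from hypothesis import given, strategies as st

# Hypothetical stand-in for real validation logic
def is_valid_meter_number(value: str) -> bool:
    return value.startswith("M") and value[1:].isdigit()

@given(st.integers(min_value=0, max_value=10**9))
def test_generated_meter_numbers_are_valid(n):
    # Hypothesis generates many integers; the property must hold for every one of them
    assert is_valid_meter_number(f"M{n}")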

Conclusion

Implementing TDD at Bynry has transformed not just our code but our entire approach to software development. We've moved from reactive firefighting to proactive quality assurance, from hoping things work to knowing they do.

The investment in our testing framework has paid off many times over in reduced bugs, improved developer satisfaction, and more reliable software. For other organizations considering a similar journey, our advice is simple: start today. The cost of not testing is far greater than the cost of testing.

At Bynry, we're proud of the culture of quality we've built. Our testing framework is more than just code—it's a manifestation of our commitment to engineering excellence and delivering the best possible experience to our customers.


I, Harsh Jha, Software Development Engineer at Bynry Inc., spearheaded the implementation of Test-Driven Development across the company's engineering teams. I work alongside Tech Lead Gaurav Dagde to continuously improve the quality and reliability of the Smart360 platform.
