
Helping every child read with Wadhwani AI

AI solution to assess and improve the reading skills of children in underserved communities

Wadhwani AI is a non-profit organization that works on multiple turnkey AI solutions for underserved populations in developing countries.

Through the Vachan Samiksha project, the team is developing a customized AI solution that teachers in rural India can use to assess the reading fluency of students and create a personalized learning plan to improve each student's reading skills.

The team had deployed the solution in primary schools in Gujarat to conduct pilots. However, the following issues needed to be solved before the project's scope could be expanded to more schools and students:

  1. Very high compute costs: The Vachan Samiksha model needed GPUs for inference, so the team had to bear very high costs to keep GPU instances provisioned for the entire duration of the pilot.
  2. Scaling was limited: Scaling was capped by the ML instance quota the team could get on SageMaker, a slow process that involved making a business case. Getting non-SageMaker instances on EKS was much easier.
  3. Some requests took a long time to respond: The pilots were conducted across thousands of schools and millions of students simultaneously, which required the system to scale horizontally as request throughput increased. However, SageMaker took upwards of 9 minutes to scale, giving a poor experience to the end user.

The TrueFoundry team partnered with Wadhwani AI to solve these problems. Using the TrueFoundry platform, the team was able to:

  1. Scale the application to handle 10X the requests per second compared to SageMaker.
  2. Reduce cloud costs by ~55% with the same level of reliability and performance.
  3. Reduce request latency by ~80% while pods were scaling horizontally.

About Wadhwani AI

Wadhwani AI was founded by Romesh and Sunil Wadhwani (part of the TIME100 AI list) to harness AI for building solutions to problems faced by underserved communities in developing nations. They partner with government and global nonprofit bodies from around the world to deliver value through these solutions. As a not-for-profit, Wadhwani AI uses artificial intelligence to solve social problems in fields including agriculture, education, and health. Some of their projects include:

  • Pest management for cotton farms: The solution helps reduce crop losses by detecting and controlling pests that affect the cotton plant.
  • TB adherence prediction: Deployed at over 100 public health facilities, it helps identify high-risk patients, detect drug resistance, and aid TB diagnosis using ultrasound data.
  • Newborn anthropometry: A solution that measures baby weight using a smartphone camera and tracks growth indicators.
  • COVID-19 forecasting and diagnosis: A solution that predicts the spread of the pandemic and detects COVID-19 infection using cough sounds.

Wadhwani AI also works with partner organizations to assess their AI-readiness, which is their ability to create and use AI solutions effectively and sustainably. Wadhwani AI’s work aims to use AI for good and to improve the lives of billions of people in developing countries.

Wadhwani AI’s Oral Reading Fluency Tool: Vachan Samiksha

Reading skills are fundamental to the educational foundation of any child. Unfortunately, many students in rural and underprivileged regions of India and other developing nations lack these skills. To address this problem at a foundational level, the Wadhwani AI team has developed an AI-based Oral Reading Fluency tool called Vachan Samiksha.

The tool deploys AI to analyze every child's reading performance. It is currently targeted at rural and semi-urban regions of the country and is being used across age groups. To make the solution generalizable for the majority of the country, the team has built an accent-inclusive model to assess fluency in both regional languages and English. Manual assessment of these skills has its biases and is often inaccurate.

The solution is served to its users (teachers at target schools) through an app that invokes the model deployed on the cloud. The student reads a paragraph, which the application records and sends to the cloud, where the model assesses reading accuracy, speed, comprehension, and other complex learning delays that could be missed in a normal evaluation.

Besides assessing these skills, the application also creates a personalized learning plan for each student to facilitate their learning, as well as demographic reports for macro-level actions by government authorities.

The team had deployed the model for the pilot with SageMaker

When we started our collaboration with the Vachan Samiksha team within Wadhwani AI, the team had been leveraging the native AWS MLOps stack for deploying the model for its pilot with the Education Department of Gujarat.

The team was using SageMaker for deployment. Their infrastructure setup was as follows:

  1. SageMaker async endpoint: The team wanted asynchronous inference since the model could take ~5-7 seconds to infer. When the application received a burst of traffic, requests needed to be stored temporarily before a worker could pick them up. SageMaker's async endpoint internally makes use of its native queue.
  2. AWS ECS: The team used AWS ECS to host the application's backend service.
  3. SageMaker queue workers: SageMaker uses ML instances as queue workers that pick up requests from the queue and run inference on them.
  4. S3 data source: Queued requests were written to and read from an S3 data source.
  5. SNS: Used as the broker to publish the output path and the success/failure messages from the output SQS queue.
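The pattern underlying this setup, requests landing on a queue and workers draining it at their own pace, can be sketched locally. This is a minimal illustration of the queue-and-worker idea, not SageMaker itself; the recording paths and result values are made up:

```python
import queue
import threading
import time

request_queue = queue.Queue()   # stands in for the async endpoint's native queue
results = {}                    # stands in for the S3 output location
lock = threading.Lock()

def worker():
    while True:
        audio_ref = request_queue.get()   # blocks until a request arrives
        if audio_ref is None:             # sentinel: shut this worker down
            request_queue.task_done()
            break
        time.sleep(0.01)                  # stand-in for the ~5-7 s model inference
        with lock:
            results[audio_ref] = f"assessment-for-{audio_ref}"
        request_queue.task_done()

# Two workers, mirroring the 2-worker benchmark setup described later
threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

for i in range(8):                        # a small burst of requests
    request_queue.put(f"s3://bucket/recording-{i}.wav")

request_queue.join()                      # wait until every request is processed
for _ in threads:
    request_queue.put(None)
for t in threads:
    t.join()

print(len(results))  # 8
```

The key property this buys is decoupling: a traffic burst fills the queue instead of overwhelming the workers, at the cost of latency while the backlog drains, which is exactly why scale-up speed mattered so much in the pilot.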

Vachan Samiksha Team's Architecture on SageMaker

Challenges that the team had been facing

The team faced challenges with this setup while trying to conduct the first pilot, which motivated them to try out other solutions:

Scaling was limited

The pilot was anticipated to run at a huge scale (~6 million students in a month). However, the team was not confident that SageMaker would be able to support this scale because:

  1. Separate quota: SageMaker has a separate quota and allocation for the ML instances it can use.
  2. ML instance quota was difficult to get: Getting extra quota is a slow process, and the team needed to make a business case to be eligible for more. Even when more quota was allocated, it was barely 1/10th of what the team expected.
  3. Non-ML instances were much easier to get: The team found it much easier to get quota for non-ML instances on EKS. However, there was no easy way to use those instances in the pilot while the deployment relied on SageMaker.

Support was slow

During the pilot, the team faced issues with the speed of scaling, and some pods were not coming up as expected. To get these issues resolved, the team had to go through SageMaker representatives, who then contacted the technical team. This round-trip delayed both the resolution and the pilot.

Scaling was slow

When request traffic increased during the pilot, the pods needed to scale horizontally (spin up new nodes to pick up and process some of the requests from the queue). This process took ~9-10 minutes for each new pod, resulting in delayed responses and a poor experience for the end user.

Unsustainably high costs

GPU instances are very expensive due to the global chip shortage. On top of this, SageMaker adds a 20-40% markup for ML instances. This made instance costs very high and infeasible for the team at the scale at which they wanted to run the project.

The system was ready for deployment with TrueFoundry in less than a week

When we met the Vachan Samiksha team, they were in the period between their first pilot and the second. The pilot was less than a week away and we had to:

  1. Set up the TrueFoundry platform on their AWS infrastructure (the data is very sensitive, and none of it was allowed to leave the project's VPC).
  2. Onboard the team and walk them through the different functionalities of the platform.
  3. Migrate the Vachan Samiksha application to the platform.
  4. Load test the application and benchmark the horizontal scaling.

Pilot was ready to be shipped with TrueFoundry in <1 Week

In the week before the pilot:

Platform Installation

Our team helped the Wadhwani AI team install the platform on their own EKS cluster. Both the control plane and the workload cluster were installed on their own infrastructure. All data, the UI elements used to interact with the platform, and the workload processes for training and deploying the models remained within their own VPC. The platform also complied with all of the company's security rules and practices.

Training and Onboarding

During the training and onboarding process, we helped the team understand how the different components of the platform interact with each other. We walked them through how to set up resources, configure autoscaling, and deploy the model.

Application Migration

The Wadhwani AI team was able to migrate the application on its own, with minimal help from the TrueFoundry team, in a single 1-hour call.

Load Testing

After the application was deployed, the team began production-level load testing. The team independently scaled the application up to more than 100 nodes through a simple argument in the TrueFoundry UI, 5X their previous highest achievable scale. They also benchmarked the speed of node scaling, which was 3-4X faster than SageMaker.


With the load tests done, the team deployed the pilot application and was prepared for the second phase of the pilot, which was rolled out to 1,000 schools, 9,000 teachers, and over 2 lakh students.

More control at a much lower cost with TrueFoundry

Application Architecture with TrueFoundry

With less than 10 hours of effort, the Wadhwani AI team realized a significant improvement in speed, control, and costs. Some of the major changes:

More Control, Visibility, and Developer Independence

The data scientists and machine learning engineers were able to configure multiple elements that were previously either difficult to manage through the AWS console or required relying on the engineering team:

Configuring GPU node Auto-scaling policy

The team scaled based on queue length and increased the maximum number of replicas/nodes to 70, up from the previous limit of 20.
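A queue-length-based policy like this boils down to a small calculation: scale out proportionally to the backlog, clamped to configured bounds. The sketch below is a hypothetical illustration, not TrueFoundry's API; the per-replica target of 4 queued requests is an assumed tuning knob, while the 70-replica cap mirrors the limit above:

```python
import math

def desired_replicas(queue_length, target_per_replica=4,
                     min_replicas=1, max_replicas=70):
    """Scale out proportionally to queue backlog, clamped to [min, max].

    target_per_replica is a hypothetical knob: how many queued requests
    one worker replica is expected to absorb before another is added.
    """
    raw = math.ceil(queue_length / target_per_replica)
    return max(min_replicas, min(max_replicas, raw))

print(desired_replicas(0))     # 1  (never below the minimum)
print(desired_replicas(88))    # 22 (the 88-request burst from the benchmark)
print(desired_replicas(1000))  # 70 (clamped to the raised maximum)
```

Raising only the maximum, as the team did, changes nothing during quiet periods; it simply removes the ceiling that previously forced requests to wait in the queue during bursts.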

Setting up time-based auto-scaling

Since most of the pilot traffic came in during school hours, when teachers interacted with students, there were few if any requests during evenings and nights. The team set up a scaling schedule that scaled pods down to a minimum during these off hours, saving about 15-20% of the pilot cost.
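A time-based schedule like this can be expressed as a simple hour-of-day rule for the replica bounds. The function below is an illustrative sketch, not TrueFoundry configuration; the 08:00-17:00 school-hours window and the specific bounds are assumptions:

```python
def replica_bounds(hour):
    """Return (min_replicas, max_replicas) for a given hour of day (0-23).

    Assumes school hours of roughly 08:00-17:00; outside that window the
    deployment is allowed to scale down to almost nothing.
    """
    if 8 <= hour < 17:     # school hours: keep GPU capacity ready
        return (5, 70)
    return (0, 2)          # evenings/nights: minimal footprint

print(replica_bounds(10))  # (5, 70)
print(replica_bounds(22))  # (0, 2)
```

Because GPU nodes dominate the bill, letting the floor drop to zero overnight is where the stated 15-20% saving comes from.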

Utilization metrics and suggestions

The team could easily monitor traffic, resource utilization, and responses directly from the TrueFoundry UI. They also received suggestions from the platform whenever resources were over- or under-provisioned.

"For me the biggest differentiator working with TrueFoundry was the ease of usage and the quick response and support provided by the team. I was able to setup and migrate our entire code base in less than 1 day which was amazing. During the pilot and whenever we had any doubts or request the TrueFoundry team was available immediately to solve our doubts and support us. Besides these factors we are getting a massive cost reduction which is super helpful for the project."

- Jatin Agrawal, Machine Learning Scientist @ Wadhwani AI

TrueFoundry helped the team scale while decreasing costs

5X faster scaling

To test scaling with TrueFoundry, the team sent a burst of 88 requests to the application and benchmarked the performance of SageMaker vs. TrueFoundry. All system configurations were kept constant: the scaling logic (based on the length of the backlog queue), the initial number of nodes, the instance type, etc.

TrueFoundry was able to scale up 78% faster than SageMaker, giving users much faster responses. The end-to-end time taken to respond to a query was 40% lower with TrueFoundry.

Autoscaling Test Results (g5.xlarge, 2 workers, 88 requests)

                                            AWS SageMaker   TrueFoundry
  Total time to process all 88 requests     660 s           395.9 s
  Time to scale up (1 worker to 2 workers)  9 min           2 min
  Time before autoscaler was triggered      2 min 30 s      15 s
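The headline percentages follow directly from the table; a quick check, using only the table's numbers:

```python
# Scale-up time: 9 min on SageMaker vs 2 min on TrueFoundry
scaleup_improvement = (9 - 2) / 9 * 100
print(round(scaleup_improvement))  # 78 -> "78% faster scaling"

# End-to-end time for the 88-request burst: 660 s vs 395.9 s
e2e_improvement = (660 - 395.9) / 660 * 100
print(round(e2e_improvement))  # 40 -> "40% less end-to-end time"
```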

50% lower cost

The cost that the team was incurring for the pilot was reduced by ~50% by moving to TrueFoundry, enabled by the following factors:

  1. ~25-30% reduction - Use of bare Kubernetes: SageMaker ML instances carry a 25-40% markup over the same instances provisioned directly on EKS. Since TrueFoundry runs on EKS directly, the team saved significantly here.
  2. ~15-20% reduction - Time-based autoscaling: The team scheduled pods to scale down during hours when they expected lower traffic, saving 15-20% of cloud costs.
  3. ~20-30% reduction - Use of spot instances: Spot instances are unutilized cloud capacity that providers offer at 50-60% discounts. By enabling a simple flag in the UI, the team could use a mix of spot and on-demand instances. Spot instances carry the risk of being de-provisioned, but TrueFoundry's reliability layer manages the mix of on-demand and spot instances to give users a reliable level of availability.
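Stacked percentage savings compound multiplicatively rather than adding up. A quick sketch using the midpoints of the ranges above (illustrative arithmetic only, not the team's actual billing):

```python
# Midpoints of the stated reduction ranges (illustrative assumptions)
reductions = {
    "bare Kubernetes (no ML-instance markup)": 0.275,  # ~25-30%
    "time-based autoscaling":                  0.175,  # ~15-20%
    "spot/on-demand mix":                      0.25,   # ~20-30%
}

remaining = 1.0
for name, r in reductions.items():
    remaining *= (1 - r)  # each saving applies to what is left of the bill

print(f"remaining cost: {remaining:.0%}")       # remaining cost: 45%
print(f"total reduction: {1 - remaining:.0%}")  # total reduction: 55%
```

That compounded figure lands in the ~50-55% range reported above.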

High GPU Availability with Lower Costs

While SageMaker was limited by the availability of GPU instances in a single AWS region, TrueFoundry can add worker nodes to the system from any region or cloud provider.
This means that:

  1. High GPU availability from multiple cloud providers/regions: Users can spin up nodes in a different AWS region with higher GPU availability, or with other cloud providers such as E2E Networks, RunPod, Azure, or GCP. This is critical since every company has been facing GPU quota limitations, and this kind of backup is necessary to ensure the reliability of the system.
  2. Cost reduction: Different cloud providers price GPU instances differently; prices can vary by 40-80% between providers. TrueFoundry lets the user connect any GPU provider to a single control plane and scale seamlessly across these vendors, with the option to choose a lower-cost vendor that has availability in order to save on costs.
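Choosing where to place a node under this model reduces to filtering candidate providers by availability and minimizing on price. The sketch below is hypothetical; the provider names and hourly prices are made up for illustration and are not real quotes:

```python
from dataclasses import dataclass

@dataclass
class GpuOffer:
    provider: str
    hourly_usd: float   # illustrative price, not a real quote
    available: bool

def cheapest_available(offers):
    """Pick the lowest-cost provider that actually has capacity."""
    candidates = [o for o in offers if o.available]
    if not candidates:
        raise RuntimeError("no GPU capacity anywhere -- keep the request queued")
    return min(candidates, key=lambda o: o.hourly_usd)

offers = [
    GpuOffer("provider-a", 1.20, available=False),  # cheapest, but sold out
    GpuOffer("provider-b", 1.60, available=True),
    GpuOffer("provider-c", 2.10, available=True),
]
print(cheapest_available(offers).provider)  # provider-b
```

The availability filter is the important part: during a GPU shortage, the cheapest region is frequently the one with no capacity, so a single-region setup stalls where a multi-provider one falls back.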

Use the best tools without any limitations

TrueFoundry provides seamless integration with any tool the team wanted to use; on AWS, this was limited by AWS's design choices and native integrations. For example, the team wanted to use NATS for publishing success/failure messages, since it allows users to subscribe to a particular type of message, something AWS's native SQS does not currently offer. TrueFoundry made these kinds of choices trivial for the Wadhwani AI team.
