Innovation for Inclusion: Hack.The.Bias with Amazon SageMaker
This post was co-authored with Daniele Chiappalupi, participant of the AWS student Hackathon team at ETH Zürich.
Everyone can easily get started with machine learning (ML) using Amazon SageMaker JumpStart. In this post, we show you how a university Hackathon team used SageMaker JumpStart to quickly build an application that helps users identify and remove biases.
“Amazon SageMaker was instrumental in our project. It made it easy to deploy and manage a pre-trained instance of Flan, offering us a solid foundation for our application. Its auto scaling feature proved crucial during high-traffic periods, ensuring that our app remained responsive and users received a steady and fast bias analysis. Further, by allowing us to offload the heavy task of querying the Flan model to a managed service, we were able to keep our application lightweight and swift, enhancing user experience across various devices. SageMaker’s features empowered us to maximize our time at the hackathon, allowing us to focus on optimizing our prompts and app rather than managing the model’s performance and infrastructure.”
– Daniele Chiappalupi, participant of the AWS student Hackathon team at ETH Zürich.
Solution overview
The theme of the Hackathon was to contribute to the UN Sustainable Development Goals with AI technology. As shown in the following figure, the application built at the Hackathon contributes to three of the Sustainable Development Goals (quality education, gender equality, and reduced inequalities) by helping users identify and remove biases from their text in order to promote fair and inclusive language.
As shown in the following screenshot, after you provide the text, the application generates a new version that is free from racial, ethnic, and gender biases. Additionally, it highlights the specific parts of your input text related to each category of bias.
In the architecture shown in the following diagram, users enter text in the React-based web app, which calls Amazon API Gateway; API Gateway in turn invokes an AWS Lambda function that analyzes the text for bias. The Lambda function queries the Flan model endpoint deployed with SageMaker JumpStart, and the de-biased text result is returned along the same route back to the front-end application.
Application development process
The process of developing this application was iterative and centered on two main areas: user interface and ML model integration.
We chose React for the front-end development due to its flexibility, scalability, and powerful tools for creating interactive user interfaces. Given the nature of our application—processing user input and presenting refined results—React’s component-based architecture proved ideal. With React, we could efficiently build a single-page application that allowed users to submit text and see de-biased results without the need for constant page refreshes.
The text entered by the user needed to be processed by a powerful language model that could scrutinize it for biases. We chose Flan for its robustness, efficiency, and scalability. To deploy Flan, we used SageMaker JumpStart, as shown in the following screenshot. Amazon SageMaker made it easy to deploy and manage a pre-trained instance of Flan, allowing us to focus on optimizing our prompts and queries rather than managing the model’s performance and infrastructure.
Connecting the Flan model to our front-end application required a robust and secure integration, which was achieved using Lambda and API Gateway. With Lambda, we created a serverless function that communicates directly with our SageMaker model. We then used API Gateway to create a secure, scalable, and readily accessible endpoint for our React app to invoke the Lambda function. When a user submitted text, the app triggered a series of API calls to the gateway—first to identify if any bias was present, then, if necessary, additional queries to identify, locate, and neutralize the bias. All these requests were routed through the Lambda function and then to our SageMaker model.
Our final task in the development process was the selection of prompts to query the language model. Here, the CrowS-Pairs dataset played an instrumental role because it provided real examples of biased text, which we used to fine-tune our requests. We selected the prompts through an iterative process, with the objective of maximizing bias-detection accuracy on this dataset.
Wrapping up the process, we observed a seamless operational flow in the finished application. The process begins with a user submitting text for analysis, which is then sent via a POST request to our secure API Gateway endpoint. This triggers the Lambda function, which communicates with the SageMaker endpoint. Consequently, the Flan model receives a series of queries. The first checks for the presence of any biases in the text. If biases are detected, additional queries are deployed to locate, identify, and neutralize these biased elements. The results are then returned through the same path—first to the Lambda function, then through the API Gateway, and ultimately back to the user. If any bias was present in the original text, the user receives a comprehensive analysis indicating the types of biases detected, whether racial, ethnic, or gender. Specific sections of the text where these biases were found are highlighted, giving users a clear view of the changes made. Alongside this analysis, a new, de-biased version of their text is presented, effectively transforming potentially biased input into a more inclusive narrative.
In the following sections, we detail the steps to implement this solution.
Set up the React environment
We began by setting up our development environment for React. For bootstrapping a new React application with minimal configuration, we used create-react-app:
npx create-react-app my-app
Build the user interface
Using React, we designed a simple interface where users can input text, with a submit button, a reset button, and an overlay that displays the processed results when they become available.
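A minimal sketch of such a component, with illustrative names rather than the team’s actual code, might look like this:

// Minimal sketch of the input form (component, prop, and state names are illustrative)
import { useState } from "react";

function BiasChecker({ onAnalyze }) {
  const [text, setText] = useState("");
  const [result, setResult] = useState(null);

  const handleSubmit = async () => {
    // onAnalyze calls the backend (see the API integration step later in this post)
    const analysis = await onAnalyze(text);
    setResult(analysis);
  };

  const handleReset = () => {
    setText("");
    setResult(null);
  };

  return (
    <div>
      <textarea value={text} onChange={(e) => setText(e.target.value)} />
      <button onClick={handleSubmit}>Submit</button>
      <button onClick={handleReset}>Reset</button>
      {result && <pre>{JSON.stringify(result, null, 2)}</pre>}
    </div>
  );
}

export default BiasChecker;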
Initiate the Flan model on SageMaker
We used SageMaker to create a pre-trained instance of the Flan language model with an endpoint for real-time inference. The endpoint accepts JSON-structured payloads for inference requests.
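A representative request body, assuming the default JumpStart Flan-T5 input schema (a text_inputs field plus optional generation parameters such as max_length), might look like this:

{
  "text_inputs": "Does the following text contain gender, racial, or ethnic bias? Text: <user text>",
  "max_length": 256,
  "num_return_sequences": 1
}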
Create a Lambda function
We developed a Lambda function that interacts directly with our SageMaker endpoint. It receives a request containing the user’s text, forwards the text to the SageMaker endpoint, and returns the refined results. ENDPOINT_NAME was set to the name of the deployed SageMaker endpoint.
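A minimal Node.js sketch of such a handler, assuming the user’s text arrives in a query field of the request body and that the endpoint name is supplied through an ENDPOINT_NAME environment variable (both assumptions), might look like this:

// Minimal sketch of a Lambda handler that forwards text to the SageMaker endpoint
const { SageMakerRuntimeClient, InvokeEndpointCommand } = require("@aws-sdk/client-sagemaker-runtime");

const client = new SageMakerRuntimeClient({});
const ENDPOINT_NAME = process.env.ENDPOINT_NAME; // name of the deployed Flan endpoint

exports.handler = async (event) => {
  // The request body carries the user's text (the "query" field name is an assumption)
  const { query } = JSON.parse(event.body);

  const payload = { text_inputs: query, max_length: 256 };

  const response = await client.send(
    new InvokeEndpointCommand({
      EndpointName: ENDPOINT_NAME,
      ContentType: "application/json",
      Body: JSON.stringify(payload),
    })
  );

  // The endpoint returns JSON; decode the byte stream before parsing
  const result = JSON.parse(Buffer.from(response.Body).toString("utf-8"));

  return {
    statusCode: 200,
    headers: { "Access-Control-Allow-Origin": "*" }, // lets the React app call the API
    body: JSON.stringify(result),
  };
};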
Set up API Gateway
We configured a new REST API in API Gateway and linked it to our Lambda function. This connection allowed our React application to make HTTP requests to the API Gateway, which subsequently triggered the Lambda function.
Integrate the React app with the API
We updated the React application to make a POST request to the API Gateway when the submit button is clicked, with the user’s text as the request body. REACT_APP_AWS_ENDPOINT corresponds to the API Gateway endpoint bound to the Lambda call.
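A minimal sketch of that call, assuming the backend expects the text in a query field (an assumption), might look like this:

// Minimal sketch of the POST request from the React app
const analyzeText = async (userText) => {
  const response = await fetch(process.env.REACT_APP_AWS_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: userText }),
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
};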
Optimize prompt selection
To improve the accuracy of bias detection, we tested different prompts against the CrowS-Pairs dataset. Through this iterative process, we chose the prompts that gave us the highest accuracy.
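A sketch of that selection loop, assuming each CrowS-Pairs example exposes its more stereotypical sentence in a sentMore field and that callModel sends a prompt to the deployed endpoint (both assumptions), might look like this:

// Illustrative sketch of the prompt-selection loop (dataset fields and helpers are assumptions)
const evaluatePrompt = async (promptTemplate, examples, callModel) => {
  let correct = 0;
  for (const example of examples) {
    // Ask the model whether the stereotypical sentence contains bias
    const answer = await callModel(promptTemplate.replace("{text}", example.sentMore));
    if (answer.toLowerCase().includes("yes")) {
      correct += 1;
    }
  }
  return correct / examples.length; // detection accuracy for this prompt
};

// Keep the prompt with the highest accuracy on the dataset
const selectBestPrompt = async (prompts, examples, callModel) => {
  let best = { prompt: null, accuracy: -1 };
  for (const prompt of prompts) {
    const accuracy = await evaluatePrompt(prompt, examples, callModel);
    if (accuracy > best.accuracy) {
      best = { prompt, accuracy };
    }
  }
  return best;
};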
Deploy and test the React app on Vercel
After building the application, we deployed it on Vercel to make it publicly accessible. We conducted extensive tests to ensure the application functioned as expected, from the user interface to the responses from the language model.
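One common way to deploy a Create React App build, assuming the Vercel CLI is run from the project root, is:

npm install -g vercel
vercel --prod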
These steps laid the groundwork for creating our application for analyzing and de-biasing text. Despite the inherent complexity of the process, the use of tools like SageMaker, Lambda, and API Gateway streamlined the development, allowing us to focus on the core goal of the project—identifying and eliminating biases in text.
Conclusion
SageMaker JumpStart offers a convenient way to explore the features and capabilities of SageMaker. It provides curated one-step solutions, example notebooks, and deployable pre-trained models. These resources allow you to quickly learn and understand SageMaker. Additionally, you have the option to fine-tune the models and deploy them according to your specific needs. Access to JumpStart is available through Amazon SageMaker Studio or programmatically using the SageMaker APIs.
In this post, you learned how a student Hackathon team developed a solution in a short time using SageMaker JumpStart. This demonstrates the potential of AWS and SageMaker JumpStart to enable rapid development and deployment of sophisticated AI solutions, even by small teams or individuals.
To learn more about using SageMaker JumpStart, refer to Instruction fine-tuning for FLAN T5 XL with Amazon SageMaker Jumpstart and Zero-shot prompting for the Flan-T5 foundation model in Amazon SageMaker JumpStart.
The ETH Analytics Club hosted ‘ETH Datathon,’ an AI/ML hackathon that drew more than 150 participants from ETH Zurich, the University of Zurich, and EPFL. The event featured workshops led by industry leaders, a 24-hour coding challenge, and valuable networking opportunities with fellow students and industry professionals. Many thanks to the ETH Hackathon team: Daniele Chiappalupi, Athina Nisioti, and Francesco Ignazio Re, as well as the rest of the AWS organizing team: Alice Morano, Demir Catovic, Iana Peix, Jan Oliver Seidenfuss, Lars Nettemann, and Markus Winterholer.
The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.
About the authors
Jun Zhang is a Solutions Architect based in Zurich. He helps Swiss customers architect cloud-based solutions to achieve their business potential. He has a passion for sustainability and strives to solve current sustainability challenges with technology. He is also a huge tennis fan and enjoys playing board games a lot.
Mohan Gowda leads the Machine Learning team at AWS Switzerland. He works primarily with automotive customers to develop innovative AI/ML solutions and platforms for next-generation vehicles. Before joining AWS, Mohan worked at a global management consulting firm with a focus on strategy and analytics. His passion lies in connected vehicles and autonomous driving.
Matthias Egli is the Head of Education in Switzerland. He is an enthusiastic team lead with broad experience in business development, sales, and marketing.
Kemeng Zhang is an ML Engineer based in Zurich. She helps global customers design, develop, and scale ML-based applications to empower their digital capabilities to increase business revenue and reduce cost. She is also very passionate about creating human-centric applications by leveraging knowledge from behavioral science. She likes playing water sports and walking dogs.
Daniele Chiappalupi is a recent graduate from ETH Zürich. He enjoys every aspect of software engineering, from design to implementation, and from deployment to maintenance. He has a deep passion for AI and eagerly anticipates exploring, utilizing, and contributing to the latest advancements in the field. In his free time, he loves going snowboarding during colder months and playing pick-up basketball when the weather warms up.