In today’s fast-paced sales environments, separating high-quality leads from noise is more critical than ever. Manual qualification is time-consuming and error-prone — but with the right automation stack, it doesn’t have to be. In this article, we’ll walk through how to build a lean, intelligent lead qualification workflow using Rig, as well as a Google Sheets MCP server. Whether you're a solo founder or scaling a sales team, this setup offers a flexible, AI-powered pipeline to help you prioritize the leads that matter most — all while keeping your favorite tools in play.
A common example where this comes into play is analysing Google Forms submissions (or, really, submissions from any form tool that can be exported to Google Sheets!). By creating an LLM-assisted data pipeline, we can significantly cut down the time spent on admin work, leaving you free to carry on with the parts of your job that can't easily be automated away.
Interested in skipping straight to trying it out? Here's the GitHub repo.
Running Google Sheets with MCP
While Google Sheets doesn't have an official MCP server, there are quite a few unofficial servers you can use. Here's the one I used, which supports both OAuth and service account credentials. I've also checked the code out myself to make sure there's nothing untoward in it - it's written using FastMCP (a Python framework for building MCP servers).
To create a service account (that the MCP server can use) you'll need to do the following:
- Create a project in Google Cloud Console (if you don't have one already or need to make a new one)
- Enable the Google Drive API and Google Sheets API for your project
- Go to the Credentials screen, create a service account and download the credentials file (you'll need this later!). Make a note of the service account's email address as well.
- Create a new Google Drive folder and share it with your service account with Editor permissions (using the email address from the service account menu).
- You should now be ready to go!
Note that you'll need to store the ID of the Google Drive folder you're using. You can find it from the URL like so:
https://drive.google.com/drive/u/0/folders/{ID_HERE}
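For reference, the credentials file you downloaded is a Google service account key - a JSON file that looks roughly like the sketch below (values shortened and redacted here, and the real file contains a few more fields). The client_email value is the address you share your Drive folder with:
{
  "type": "service_account",
  "project_id": "your-project-id",
  "private_key_id": "...",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "your-service-account@your-project-id.iam.gserviceaccount.com",
  "client_id": "...",
  "token_uri": "https://oauth2.googleapis.com/token"
}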
Getting Started
Talking to our MCP server
Before we do anything else, we need to set the following values as environment variables for our MCP server:
- The path to our credentials file (for use with Google service accounts)
- The ID of the Google Drive folder we've given our service account access to
Next, we need to git clone the repository:
git clone https://github.com/xing5/mcp-google-sheets
Ensure that you put your downloaded service account credentials file into the newly cloned git folder! This will allow the MCP server to find your credentials. You'll also need to set your service account path (wherever your credentials .json file is) and Google Drive folder ID as environment variables:
export SERVICE_ACCOUNT_PATH=./credentials.json
export DRIVE_FOLDER_ID=my-folder-id-here
Once done, start the server in SSE (Server-Sent Events) mode. Note that the server itself is a Python project, so run it the way its README describes (e.g. with uv) rather than with cargo. In SSE mode it essentially spins up a web server which emits messages on a given SSE path. Server-Sent Events are a great way to stream messages from the server to the client without keeping a constant two-way connection open: the SSE stream itself only flows server-to-client, and the client sends its own requests over ordinary HTTP.
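As a rough illustration (not the exact payloads this particular server emits), an SSE stream is just a long-lived HTTP response made up of small text blocks separated by blank lines, with each MCP message carried as a JSON-RPC payload - something like:
event: message
data: {"jsonrpc": "2.0", "id": 1, "result": {"tools": [...]}}

event: message
data: {"jsonrpc": "2.0", "id": 2, "result": {...}}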
Project setup
To get started with creating our project, you'll need to run the following to initialise your Rust application:
cargo init gsheets-mcp-rig
Next, you'll want to install the project dependencies - which you can do with the one-liner below:
cargo add anyhow mcp-core rig-core tokio -F rig-core/mcp,tokio/macros,\
tokio/rt-multi-thread
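If you'd rather edit Cargo.toml by hand (or just want to check the command worked), the dependencies section should end up looking something like this - the version numbers below are placeholders, so use whatever cargo add resolves for you:
[dependencies]
# versions shown here are illustrative placeholders
anyhow = "1"
mcp-core = "0.1"
rig-core = { version = "0.11", features = ["mcp"] }
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }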
What did we just add?
- mcp-core: A crate for working with MCP.
- rig-core: The Rig core crate. Uses the mcp feature for integration with mcp-core.
- tokio: An async Rust runtime. Uses the macros and rt-multi-thread features for convenient async runtime setup.
You'll also want to set an OpenAI API key as an environment variable.
You can do this with the one-liner below:
export OPENAI_API_KEY=my-key-here
Building our project
Now for the fun part: building our project!
To start with, we will create a simple function that attempts to connect to our MCP server by its URL and port (in this case, localhost:8000, assuming you have it running locally). We pass the URL into ClientSseTransport::new to create the transport, build the MCP client from it, open the connection and return the client itself.
async fn connect_to_gsheets_mcp()
-> Result<mcp_core::client::Client<ClientSseTransport>, Box<dyn std::error::Error>> {
println!("Loading Google Sheets MCP server...");
// Create the MCP client
let client_transport = ClientSseTransport::new(
"http://localhost:8000"
)
.unwrap();
let mcp_client = ClientBuilder::new(client_transport).build();
// Start the MCP client
mcp_client.open().await?;
Ok(mcp_client)
}
Next, we need to fetch our list of tools from the MCP server and create a ToolSet as well as a Vec of tool definitions - both of which will be used to invoke functions as well as help the LLM select which tool to use. To break this function down:
- We use the fold() function, which takes an accumulator (the ToolSet and the Vec, both contained in a tuple)
- Within the closure, the tuple of accumulators is on the left hand side, with the item returned from the iterator on the right hand side
- We then create a rig::tools::McpTool and add it to the ToolSet, while also creating a ToolDefinition and pushing it to the Vec
fn get_tools_from_mcp_tool_response(
tools_list_res: ToolsListResponse,
mcp_client: mcp_core::client::Client<ClientSseTransport>,
) -> (ToolSet, Vec<ToolDefinition>) {
let (tools, tooldefs) = tools_list_res.tools.into_iter().fold(
(ToolSet::builder().build(), Vec::new()),
|(mut tools, mut tooldefs), tool| {
let mcp_tool = McpTool::from_mcp_server(tool.clone(), mcp_client.clone());
tools.add_tool(mcp_tool);
let tooldef = ToolDefinition {
description: tool.description.unwrap_or(String::new()),
name: tool.name,
parameters: tool.input_schema.clone(),
};
tooldefs.push(tooldef);
(tools, tooldefs)
},
);
(tools, tooldefs)
}
Creating our reasoning loop
Now that we've fetched our tools from the MCP server, let's add them to our completion request.
We also want to make our application self-sufficient: it should be able to incorporate tool responses and return an answer from the LLM without the user needing to do anything in between.
Before we start, let's create a small helper function that converts a tool call ID and a content string into a UserContent::tool_result.
fn tool_result_from_text_response(id: impl Into<String>, content: String) -> UserContent {
let content = OneOrMany::one(ToolResultContent::Text(content.into()));
UserContent::tool_result(id, content)
}
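As a quick, purely hypothetical usage sketch (the ID and text below are made up), turning a tool call's output into a user message the model can read would look something like this:
// hypothetical example - "call_0" stands in for a real tool call ID
let tool_content = tool_result_from_text_response("call_0", "3 leads matched".to_string());
let tool_message = Message::User {
    content: OneOrMany::one(tool_content),
};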
Now we can implement our response loop! This is a relatively long function, so buckle up. Here's what the function will do:
- Create a CompletionRequestBuilder and fill it out with all the required items, including tool definitions, the prompt and the chat history.
- Send the completion request.
- Match the model's response:
  - If it's a text message, add the prompt and assistant message to the chat history, then just return the text
  - If it's a tool call, attempt to call the function (using the toolset we initialised)
  - If the tool call results in an error, set the tool call error as the prompt (explained below)
  - If it was successful, push both the original prompt and assistant message to the chat history, then set the prompt as the tool call result (explained below)
When we're sending a completion request to the model, the tool-result prompt resolves to a message with the tool role. This allows us to continue the conversation without explicitly having to send a prompt ourselves.
Now for the code! Let's begin with the function signature, then fill out the completion request and send it. Note that although the function is generic over the completion model, we don't need to specify the type parameter when calling it - the compiler infers it from the model we pass in (the OpenAI one):
async fn call_until_response<M: CompletionModel>(
mut prompt: Message,
model: &M,
preamble: &str,
chat_history: &mut Vec<Message>,
toolset: &ToolSet,
tooldefs: Vec<ToolDefinition>,
) -> Result<String, anyhow::Error> {
loop {
let request = CompletionRequestBuilder::new(model.clone(), prompt.to_owned())
.preamble(preamble.to_owned())
.messages(chat_history.clone())
.temperature(0.0)
.max_tokens(1024)
.tools(tooldefs.clone())
.build();
// call model
let resp = model
.completion(request)
.await
.map_err(|x| anyhow::anyhow!("Error when prompting: {x}"))?;
// rest of code goes down here!
}
}
Next, we need to match on the response and see what type it is. If it's text, we return it; otherwise, we try to execute the tool call. If the tool call fails, we set the prompt to a tool result containing the error and continue the loop so the model can react to it. If it succeeds, the variable assignment works as normal and we can continue.
// keep calling tools until we get human readable answer from the model
match resp.choice.first() {
AssistantContent::Text(text) => {
let text = text.text;
chat_history.push(prompt.clone().into());
chat_history.push(Message::assistant(&text));
return Ok(text);
}
AssistantContent::ToolCall(tool_call) => {
// Call the tool
let tool_response = toolset
.call(
&tool_call.function.name,
tool_call.function.arguments.to_string(),
)
.await;
let tool_response = match tool_response {
Ok(res) => res,
Err(e) => {
chat_history.push(prompt.clone());
chat_history.push(Message::Assistant {
content: OneOrMany::one(AssistantContent::ToolCall(tool_call.clone())),
});
prompt = Message::User {
content: OneOrMany::one(UserContent::ToolResult(ToolResult {
id: tool_call.id.to_string(),
content: OneOrMany::one(ToolResultContent::Text(rig::message::Text {
text: e.to_string(),
})),
})),
};
continue;
}
};
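// the rest of this arm - building the tool result message and updating the chat history - is shown in the next snippet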
}
}
}
Finally, we convert the text returned from the tool into a tool response message, then push the original prompt and assistant message (containing the tool call) to the chat history. We then (optionally) print the tool response message and set the "user" prompt to be the tool response message, so that the next iteration of the loop sends it straight back to the model.
let tool_response_message = tool_result_from_text_response(tool_call.id.clone(), tool_response);
let tool_call = OneOrMany::one(AssistantContent::ToolCall(tool_call));
// add tool call and response into chat history and continue the loop
chat_history.push(prompt.clone().into());
chat_history.push(Message::Assistant { content: tool_call });
println!("{tool_response_message:?}");
let tool_result_message = Message::User {
content: OneOrMany::one(tool_response_message),
};
prompt = tool_result_message;
After this, the function will loop back around.
Hooking everything back up
Now that we've written all the difficult parts of the code, the easy part remains: taking user input and writing our main function!
To take input from the terminal, we simply use stdin().read_line in a function that returns a String, trimming the trailing newline so that commands like quit match exactly:
fn take_input() -> String {
let mut str = String::new();
print!("> ");
stdout()
.flush()
.expect("Failed to flush stdout - this should normally never happen");
stdin().read_line(&mut str).expect("to read line");
// read_line keeps the trailing newline, so trim it before returning
str.trim().to_string()
}
Next is the main function. We need to create our preamble (system message), then initialise everything and run a loop that takes input from the user and uses our LLM reasoning loop to get a response (then prints it). We will also add a quit command so we can break out of the loop.
const PREAMBLE: &str = r###"You are an agent designed to qualify sales leads from Google Sheets.
Users will typically ask you to qualify leads from Google Forms submissions
(or imported spreadsheets from results of other form submission-type applications).
Your job is to qualify sales leads based on the user's criteria.
If they don't give you a criteria for qualification,
ask what demographic the user is trying to capture with the form and qualify leads based off of that.
When creating the results, use a new sheet in the spreadsheet file the user has provided you with.
When done, specify the location of the sheet so that the user can inspect the result for themselves."###;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mcp_client = connect_to_gsheets_mcp().await?;
let tools_list_res = mcp_client.list_tools(None, None).await?;
let (tools, tooldefs) = get_tools_from_mcp_tool_response(tools_list_res, mcp_client);
let openai_client = providers::openai::Client::from_env();
let model = openai_client.completion_model("gpt-4o");
println!("Hi! How can I help you today? (write \"quit\" to exit)");
println!("------------");
loop {
let prompt = take_input();
println!("------------");
if prompt == *"quit" {
println!("Thanks for using me! I am quitting now.");
break;
}
let mut chat_history = Vec::new();
let res = call_until_response(
prompt.into(),
&model,
PREAMBLE,
&mut chat_history,
&tools,
tooldefs.clone(),
)
.await
.unwrap();
println!("{res}");
println!("------------");
}
Ok(())
}
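With everything wired up, here's a purely hypothetical example of the kind of request you might type at the > prompt (the spreadsheet name and criteria are made up - swap in your own):
> Qualify the leads in the "Webinar signups" spreadsheet: I'm looking for companies with 50 or more employees in the healthcare sector. Put the qualified leads in a new sheet.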
Extending this demo
So, now that we're done: how can we extend this demo? Below are a couple of ideas:
- Switch to a webpage interface: While this example works as-is from the terminal, if you plan on wider adoption you may want to host it as a web app instead.
- Add some personality to your chatbot: Depending on your target audience or customer base, adapting the chatbot's personality by prompt engineering the preamble (system message) can make it much easier for your audience to approach an AI-assisted interface. Don't forget that you can use guardrails, semantic routing and other techniques to improve the effectiveness of your chatbot as well as prevent any potential disasters!
Finishing up
Thanks for reading! Hopefully this has helped you learn a little bit more about how you can use MCP servers in production, as well as how Rig can help you achieve that goal.
For additional Rig resources and community engagement:
Check out more examples in our gallery.
Contribute or report issues on our GitHub.
Join discussions in our Discord community!