Welcome! 👋
In this tutorial, we’ll build a modular, production-ready Node.js backend for IdeaLens — an AI assistant that helps creators brainstorm, summarize, caption, and ideate content. We’ll also integrate Groq to access blazing-fast LLMs like LLaMA 3.
📦 Prerequisites
Make sure you have the following:
- Node.js (v16+)
- npm or yarn
- MongoDB Atlas or local
- A Groq API Key
- REST client (Postman, Thunder Client, etc.)
📁 Folder Structure
backend/
├── controllers/
│ └── summary.controller.js
├── routes/
│ └── summary.route.js
├── utils/
│ └── groqClient.js <-- ✨ Groq integration
├── app.js
├── server.js
└── .env
📦 Install Dependencies
npm install express cors cookie-parser dotenv mongoose axios
We’ll use axios to call the Groq API.
🧠 Integrating Groq: Ultra-fast Open-Source LLMs
Create utils/groqClient.js:
const axios = require("axios");

const GROQ_API_KEY = process.env.GROQ_API_KEY;

// Pre-configured axios instance for Groq's OpenAI-compatible API
const groq = axios.create({
  baseURL: "https://api.groq.com/openai/v1",
  headers: {
    "Authorization": `Bearer ${GROQ_API_KEY}`,
    "Content-Type": "application/json"
  }
});

// Send a chat completion request and return the assistant's reply text
async function chatWithGroq(messages, model = "llama-3.3-70b-versatile") {
  try {
    const response = await groq.post("/chat/completions", {
      model,
      messages,
      temperature: 0.7
    });
    return response.data.choices[0].message.content;
  } catch (err) {
    console.error("Groq Error:", err.response?.data || err.message);
    throw new Error("Groq API failed");
  }
}

module.exports = { chatWithGroq };
You can now easily switch between llama-3.3-70b-versatile and other Groq-hosted models by passing a different model name.
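To sanity-check the client on its own, you could run a quick throwaway script like the one below (the file name scripts/test-groq.js is just an example; it assumes GROQ_API_KEY is available in your .env):

// scripts/test-groq.js — quick manual test of the Groq client (example file name)
require("dotenv").config(); // load GROQ_API_KEY before groqClient reads it

const { chatWithGroq } = require("../utils/groqClient");

(async () => {
  const reply = await chatWithGroq([
    { role: "system", content: "You are a concise creative assistant." },
    { role: "user", content: "Give me three short video ideas about home coffee brewing." }
  ]);
  console.log(reply);
})();

Run it with node scripts/test-groq.js from the backend folder; if the key and network are fine, you should see a plain-text reply in the terminal.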
🔁 Using Groq in a Controller
In controllers/summary.controller.js:
const { chatWithGroq } = require("../utils/groqClient");

const generateSummary = async (req, res) => {
  const { idea } = req.body;

  const messages = [
    {
      role: "system",
      content: "You're a creative assistant helping generate content ideas."
    },
    {
      role: "user",
      content: `Summarize the following content idea: "${idea}". Include mood, genre, platform suggestions, hooks, and short video direction.`
    }
  ];

  try {
    const aiResponse = await chatWithGroq(messages);
    res.json({ summary: aiResponse });
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
};

module.exports = { generateSummary };
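One refinement worth considering: the controller currently forwards req.body.idea to Groq even if it is missing or empty. A small guard at the top of generateSummary (a sketch, not part of the original controller) keeps malformed requests from burning an API call:

// Hypothetical guard — place before building the messages array
if (!idea || typeof idea !== "string" || !idea.trim()) {
  return res.status(400).json({ error: "Request body must include a non-empty 'idea' string." });
}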
📌 Example Route: /api/summary
In routes/summary.route.js:
const express = require("express");
const router = express.Router();
const { generateSummary } = require("../controllers/summary.controller");
router.post("/", generateSummary);
module.exports = router;
🔐 Environment Variables
In .env:
PORT=5000
CLIENT_URL=http://localhost:3000
MONGO_URI=your_mongo_url
GROQ_API_KEY=your_groq_api_key
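A missing GROQ_API_KEY otherwise only shows up later as a failed request, so you might add a fail-fast check at startup. A small sketch (the exact placement is up to you, e.g. right after dotenv loads in server.js):

// Example startup guard — exit early if required environment variables are absent
const required = ["MONGO_URI", "GROQ_API_KEY"];
const missing = required.filter((name) => !process.env[name]);
if (missing.length) {
  console.error(`Missing required environment variables: ${missing.join(", ")}`);
  process.exit(1);
}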
🛠️ Bootstrapping the App
In app.js:
const express = require("express");
const cors = require("cors");
const cookieParser = require("cookie-parser");
const summaryRoutes = require("./routes/summary.route");

const app = express();

app.use(cors({ origin: process.env.CLIENT_URL, credentials: true }));
app.use(express.json());
app.use(cookieParser());

app.use("/api/summary", summaryRoutes);

app.get("/", (req, res) => {
  res.send("IdeaLens backend is up 🚀");
});

module.exports = app;
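As you add more AI routes, a shared error handler keeps the controllers lean. A minimal sketch you could register after the routes in app.js (not part of the original file):

// Hypothetical centralized error handler — register after all routes
app.use((err, req, res, next) => {
  console.error("Unhandled error:", err.message);
  res.status(err.status || 500).json({ error: "Something went wrong." });
});

Controllers that want to use it would call next(error) instead of sending the 500 response themselves.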
In server.js:
require("dotenv").config(); // load env vars before app.js and groqClient read them

const app = require("./app");
const mongoose = require("mongoose");

const PORT = process.env.PORT || 5000;

mongoose.connect(process.env.MONGO_URI)
  .then(() => {
    app.listen(PORT, () =>
      console.log(`Server running at http://localhost:${PORT}`)
    );
  })
  .catch((err) => console.error("MongoDB connection failed", err));
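With the server running (node server.js), you can hit the summary endpoint from Postman or Thunder Client, or with a quick script like this (a sketch using Node 18+'s built-in fetch; adjust the port if you changed PORT):

// Example smoke test — assumes the server is listening on http://localhost:5000
(async () => {
  const res = await fetch("http://localhost:5000/api/summary", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ idea: "A day-in-the-life vlog of a home barista" })
  });
  const data = await res.json();
  console.log(data.summary);
})();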
📌 Route Overview
| Endpoint | Method | Description |
|---|---|---|
| /api/summary | POST | Generate structured summary with Groq |
| /api/analyze | POST | Content analysis |
| /api/captions | POST | Generate captions + hashtags |
| /api/songs | POST | Suggest music tracks |
| /api/chat | POST | General AI chat |
| /api/refine | POST | Refine based on user feedback |
| /api/upload-media | POST | Upload audio/video for analysis |
| /api/auth | POST | Auth with JWT & cookies |
All powered by modular routes and AI controller functions.
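Each of these follows the same pattern as /api/summary: a controller that builds a prompt and calls chatWithGroq, plus a route file mounted in app.js. For example (route names here mirror the table above; the files themselves are left for you to implement):

// app.js — mounting additional feature routes (illustrative; create the route files as needed)
app.use("/api/captions", require("./routes/captions.route"));
app.use("/api/chat", require("./routes/chat.route"));
app.use("/api/refine", require("./routes/refine.route"));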
🎯 Done! Your Groq-Powered Node.js Backend 🚀
You’ve now got a fully working Express backend with Groq integrated — giving you high-performance LLM workflows for idea summarization, creative chatting, and content generation in your IdeaLens platform.