Building an AI-Powered App with Koa, OpenAI, TypeScript, and Heroku
Modern web development often demands combining powerful backend frameworks with intelligent services and scalable deployment. This guide demonstrates how to integrate Koa, the OpenAI API, and TypeScript into a single application, then deploy it effortlessly to Heroku.
Setting Up the Project
We'll start by initializing a new TypeScript project. Koa provides a lightweight yet powerful foundation for our API. Koa's official documentation offers an excellent deep dive into its middleware-centric design.
First, set up your package.json and install dependencies:
{
  "name": "koa-openai-app",
  "version": "1.0.0",
  "description": "",
  "main": "dist/server.js",
  "scripts": {
    "start": "node dist/server.js",
    "build": "tsc",
    "dev": "ts-node-dev --respawn src/server.ts"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@koa/router": "^12.0.1",
    "dotenv": "^16.4.5",
    "koa": "^2.15.2",
    "koa-bodyparser": "^4.4.1",
    "openai": "^4.38.2"
  },
  "devDependencies": {
    "@types/koa": "^2.15.0",
    "@types/koa__router": "^12.0.4",
    "@types/koa-bodyparser": "^4.3.12",
    "@types/node": "^20.12.12",
    "ts-node-dev": "^2.0.0",
    "typescript": "^5.4.5"
  }
}
Next, configure TypeScript by creating a tsconfig.json file. This ensures our TypeScript code compiles correctly to JavaScript. TypeScript's documentation provides comprehensive details on compiler options.
{
  "compilerOptions": {
    "target": "es2022",
    "module": "commonjs",
    "rootDir": "./src",
    "outDir": "./dist",
    "esModuleInterop": true,
    "strict": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  },
  "include": ["src/**/*.ts"],
  "exclude": ["node_modules"]
}
Building the Koa Server with TypeScript
Our Koa server will expose an endpoint to interact with the OpenAI API. We'll use koa-bodyparser to parse incoming JSON requests and @koa/router for routing. Koa's middleware pattern makes request handling highly modular.
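Koa's "onion" middleware model is easy to picture with a minimal compose function. The sketch below is illustrative only (it is not Koa's actual source), but it shows why code after `await next()` runs on the way back out of the stack:

```typescript
// Minimal sketch of Koa-style middleware composition (illustrative, not Koa's real implementation).
type Ctx = { log: string[] };
type Middleware = (ctx: Ctx, next: () => Promise<void>) => Promise<void>;

function compose(middleware: Middleware[]) {
  return async (ctx: Ctx): Promise<void> => {
    // dispatch(i) runs middleware[i], giving it a `next` that runs middleware[i + 1].
    const dispatch = async (i: number): Promise<void> => {
      if (i === middleware.length) return;
      await middleware[i](ctx, () => dispatch(i + 1));
    };
    await dispatch(0);
  };
}

const handler = compose([
  async (ctx, next) => { ctx.log.push('A:before'); await next(); ctx.log.push('A:after'); },
  async (ctx, next) => { ctx.log.push('B'); await next(); },
]);

const ctx: Ctx = { log: [] };
handler(ctx).then(() => console.log(ctx.log.join(' -> ')));
// prints "A:before -> B -> A:after"
```

This downstream/upstream flow is what lets a single middleware measure response time or catch errors from everything registered after it.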
Create src/server.ts:
import 'dotenv/config'; // Load .env first: ES imports are hoisted, so openaiService runs before any later dotenv.config() call
import Koa from 'koa';
import Router from '@koa/router';
import bodyParser from 'koa-bodyparser';
import { generateText } from './openaiService';

const app = new Koa();
const router = new Router();

app.use(bodyParser());

router.post('/generate', async (ctx) => {
  const { prompt } = ctx.request.body as { prompt?: string };
  if (!prompt) {
    ctx.status = 400;
    ctx.body = { error: 'Prompt is required' };
    return;
  }
  try {
    const response = await generateText(prompt);
    ctx.body = { response };
  } catch (error) {
    console.error('OpenAI API error:', error);
    ctx.status = 500;
    ctx.body = { error: 'Failed to generate text' };
  }
});

app.use(router.routes()).use(router.allowedMethods());

const PORT = Number(process.env.PORT) || 3000; // Heroku supplies PORT as a string; listen() wants a number
app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});
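With the server running locally (npm run dev), you can exercise the endpoint with curl. The prompt below is just an example:

```shell
# Assumes the server from src/server.ts is listening on port 3000.
curl -X POST http://localhost:3000/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Write a haiku about TypeScript"}'
```

A successful call returns a JSON body of the shape {"response": "..."}; a missing prompt returns a 400 with an error message.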
Integrating the OpenAI API
We'll use the official openai Node.js library to communicate with the OpenAI API. This library simplifies making requests to powerful AI models. OpenAI's API documentation details all available endpoints and models.
Create src/openaiService.ts:
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY, // Ensure your API key is in .env
});

export async function generateText(prompt: string): Promise<string> {
  try {
    const chatCompletion = await openai.chat.completions.create({
      model: 'gpt-3.5-turbo',
      messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: prompt }
      ],
      max_tokens: 150,
    });
    return chatCompletion.choices[0].message.content || 'No response generated.';
  } catch (error) {
    console.error('Error calling OpenAI API:', error);
    throw new Error('Failed to get response from OpenAI.');
  }
}
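Calls to the OpenAI API can fail transiently (rate limits, network blips), so in production you may want to retry with backoff. Here is a generic helper you could wrap around generateText; it is a sketch of the pattern, not part of the openai library:

```typescript
// Hedged sketch: retry an async operation with exponential backoff.
// withRetry and its parameters are this guide's own names, not an openai API.
async function withRetry<T>(fn: () => Promise<T>, retries = 3, baseDelayMs = 100): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Exponential backoff: 100ms, 200ms, 400ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}

// Usage with a flaky stand-in for the OpenAI call:
let calls = 0;
const flaky = async () => {
  calls++;
  if (calls < 3) throw new Error('rate limited');
  return 'ok';
};
withRetry(flaky).then((result) => console.log(result, calls)); // prints "ok 3"
```

In the real service you would call withRetry(() => generateText(prompt)), and ideally retry only on errors you know to be transient.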
Remember to create a .env file in your project root containing OPENAI_API_KEY=your_openai_api_key_here, and add .env to your .gitignore so the key is never committed.
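A missing key otherwise surfaces only later as a confusing authentication error from OpenAI. One option is to fail fast at startup with a small guard; requireEnv is a hypothetical helper for this guide, not a dotenv or openai API:

```typescript
// Hedged sketch: fail fast if a required environment variable is absent.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Stand-in value, as if loaded from .env by dotenv:
process.env.OPENAI_API_KEY = 'sk-test';
console.log(requireEnv('OPENAI_API_KEY')); // prints "sk-test"
```

You could then construct the client with new OpenAI({ apiKey: requireEnv('OPENAI_API_KEY') }) so the process refuses to boot without its configuration.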
Deploying to Heroku
Heroku provides a robust platform for deploying web applications with minimal configuration. We'll use a Procfile to tell Heroku how to start our Koa server. Heroku's documentation offers extensive guides on deployment.
Create a Procfile in your project root:
web: npm start
This Procfile only needs to start the compiled JavaScript application (npm start). Heroku's Node.js buildpack automatically runs your package.json build script (npm run build) during deployment and then prunes devDependencies from the slug, so running tsc at dyno boot would fail: typescript is a devDependency and is no longer installed by the time the dyno starts.
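If you prefer to be explicit about when compilation happens, the Node.js buildpack also honors a heroku-postbuild script, which it runs at deploy time in place of build. A sketch of the relevant package.json fragment (the script name is real; the values assume this project's layout):

```json
{
  "scripts": {
    "start": "node dist/server.js",
    "heroku-postbuild": "npm run build"
  }
}
```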
Heroku Deployment Steps:
- Install the Heroku CLI: npm install -g heroku
- Log in: heroku login
- Create an app: heroku create your-app-name (or let Heroku generate a name)
- Set the environment variable: heroku config:set OPENAI_API_KEY=your_openai_api_key_here
- Deploy: git push heroku main
Heroku will detect your Procfile, build your application, and deploy it. Ensure your package.json's start script points to the compiled JavaScript output (dist/server.js).
Conclusion
By integrating Koa's flexible backend, OpenAI's intelligent capabilities, TypeScript's type safety, and Heroku's seamless deployment, you can build and scale powerful AI-driven applications. This stack provides a solid foundation for creating innovative web services that leverage the latest in artificial intelligence.