Full-Stack AI: React, Sails.js, Azure, & Ollama Integration
Building intelligent web applications requires a cohesive stack that handles everything from user interaction to complex AI inference. This post demonstrates how to integrate React for the frontend, Sails.js for a robust backend, Microsoft Azure for scalable cloud hosting, and Ollama for local large language model (LLM) inference, all powered by TypeScript.
Architectural Overview
Our architecture features a React single-page application communicating with a Sails.js API. Sails.js acts as the orchestrator, interacting with a managed database on Azure and making requests to a locally running Ollama instance for AI capabilities. Azure hosts both our Sails.js application and its database, providing scalability and reliability.
Sails.js Backend with TypeScript
Sails.js provides a convention-over-configuration approach for building Node.js applications, ideal for RESTful APIs and real-time features. We'll use TypeScript for type safety and better maintainability.
First, set up a Sails.js project with TypeScript:
npm install -g sails@latest
sails new my-ai-app --no-frontend
cd my-ai-app
npm install --save-dev typescript ts-node @types/node
Configure tsconfig.json and update package.json scripts to use ts-node for development.
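A minimal tsconfig.json for this layout might look like the following (an illustrative starting point, not the only valid configuration):

```json
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "esModuleInterop": true,
    "strict": true,
    "outDir": "dist",
    "skipLibCheck": true
  },
  "include": ["api/**/*", "config/**/*"]
}
```

In package.json, a development script along the lines of "dev": "ts-node app.js" lets ts-node register its require hook so the TypeScript files under api/ and config/ are compiled on the fly.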
Integrating Ollama API
Our Sails.js backend will expose an endpoint to interact with Ollama. Assume Ollama is running on http://localhost:11434. We'll create a service to encapsulate the Ollama interaction.
api/services/OllamaService.ts:
import axios from 'axios';

// Sails exposes a global `sails` app instance at runtime; declare it for TypeScript.
declare const sails: any;
interface OllamaResponse {
response: string;
done: boolean;
}
export const OllamaService = {
async generateResponse(prompt: string): Promise<string> {
try {
const response = await axios.post<OllamaResponse>('http://localhost:11434/api/generate', {
model: 'llama2', // Or your preferred model
prompt: prompt,
stream: false
});
return response.data.response;
} catch (error) {
sails.log.error('Error calling Ollama:', error);
throw new Error('Failed to get response from Ollama.');
}
}
};
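The service above sets stream: false for simplicity. With stream: true, Ollama instead returns newline-delimited JSON, one chunk per line, each carrying a partial response field and a done flag. A small parser for that format, sketched against the same {response, done} shape used above:

```typescript
interface OllamaChunk {
  response: string;
  done: boolean;
}

// Accumulate the partial `response` fields of a raw NDJSON payload
// into the full generated text.
export function collectStreamedResponse(ndjson: string): string {
  return ndjson
    .split('\n')
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as OllamaChunk)
    .map((chunk) => chunk.response)
    .join('');
}
```

In a real streaming setup you would feed chunks to the client as they arrive (for example over Sails' WebSocket support) rather than buffering the whole payload, but the per-line parsing is the same.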
Database Management with Azure PostgreSQL
For persistent storage, we'll use Azure Database for PostgreSQL, a fully managed relational database service. Sails.js's Waterline ORM makes database interaction seamless. Install the PostgreSQL adapter:
npm install sails-postgresql --save
Configure config/datastores.ts:
// config/datastores.ts — Sails merges this export into sails.config.datastores
export const datastores = {
  default: {
    adapter: 'sails-postgresql',
    url: process.env.DATABASE_URL || 'postgresql://user:password@localhost:5432/my_ai_db',
    ssl: process.env.NODE_ENV === 'production' ? { rejectUnauthorized: false } : false,
  },
};
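The ssl ternary matters in practice: Azure Database for PostgreSQL enforces TLS, while a local development database usually does not. Pulled out as a standalone helper purely for illustration (this function is not part of Sails):

```typescript
// Mirror the ternary from config/datastores.ts: require TLS against the
// managed Azure database in production, skip it for local development.
export function resolveSslConfig(
  nodeEnv: string | undefined
): { rejectUnauthorized: boolean } | false {
  return nodeEnv === 'production' ? { rejectUnauthorized: false } : false;
}
```

Note that rejectUnauthorized: false encrypts the connection but skips certificate verification; for stricter production setups, supply Azure's CA certificate via the ssl options instead.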
Create a model to store AI interactions (api/models/Interaction.ts):
// api/models/Interaction.ts
// Waterline adds `id`, `createdAt`, and `updatedAt` automatically
// (timestamps are stored as epoch-millisecond numbers by default).
interface InteractionRecord {
  id: number;
  prompt: string;
  aiResponse: string;
  createdAt: number;
  updatedAt: number;
}

module.exports = {
  attributes: {
    prompt: { type: 'string', required: true },
    aiResponse: { type: 'string', required: true },
  },
};
Now, a controller to tie it all together (api/controllers/AiController.ts):
// Sails' req/res extend Express with helpers like res.badRequest and res.ok,
// so we type them loosely here rather than as plain Express types.
import { OllamaService } from '../services/OllamaService';

declare const sails: any; // Sails global app instance, available at runtime

export const AiController = {
  async chat(req: any, res: any) {
const { prompt } = req.body;
if (!prompt) {
return res.badRequest('Prompt is required.');
}
try {
const aiResponse = await OllamaService.generateResponse(prompt);
const newInteraction = await sails.models.interaction.create({
prompt,
aiResponse,
}).fetch();
return res.ok({ prompt, aiResponse, id: newInteraction.id });
} catch (error) {
sails.log.error('AI chat error:', error);
return res.serverError('Failed to process AI request.');
}
},
};
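For the frontend's POST /ai/chat request to reach this action, the route also has to be declared. A sketch of config/routes.ts for the layout above, using Sails' classic "Controller.action" route target syntax:

```typescript
// config/routes.ts — Sails merges this export into sails.config.routes
export const routes = {
  'POST /ai/chat': 'AiController.chat',
};
```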
React Frontend with TypeScript
The React frontend will allow users to submit prompts and display AI responses. We'll use fetch or axios to communicate with our Sails.js API.
src/components/AIChat.tsx:
import React, { useState } from 'react';
interface ChatResponse {
prompt: string;
aiResponse: string;
id: string;
}
const AIChat: React.FC = () => {
const [prompt, setPrompt] = useState<string>('');
const [response, setResponse] = useState<string | null>(null);
const [loading, setLoading] = useState<boolean>(false);
const [error, setError] = useState<string | null>(null);
const handleSubmit = async (e: React.FormEvent) => {
e.preventDefault();
setLoading(true);
setError(null);
setResponse(null);
try {
const res = await fetch('/ai/chat', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ prompt }),
});
if (!res.ok) {
throw new Error(`HTTP error! status: ${res.status}`);
}
const data: ChatResponse = await res.json();
setResponse(data.aiResponse);
} catch (err: any) {
setError(err.message || 'An unknown error occurred.');
} finally {
setLoading(false);
}
};
return (
<div>
<h1>AI Chat with Ollama</h1>
<form onSubmit={handleSubmit}>
<input
type="text"
value={prompt}
onChange={(e) => setPrompt(e.target.value)}
placeholder="Enter your prompt..."
disabled={loading}
/>
<button type="submit" disabled={loading}>
{loading ? 'Generating...' : 'Ask AI'}
</button>
</form>
{error && <p style={{ color: 'red' }}>Error: {error}</p>}
{response && (
<div>
<h2>AI Response:</h2>
<p>{response}</p>
</div>
)}
</div>
);
};
export default AIChat;
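Since res.json() resolves to any, a runtime type guard can tighten the handling of the server payload before it is trusted. An optional refinement, mirroring the ChatResponse interface above (it accepts either a string or numeric id, since Waterline's default primary key is an auto-incrementing number):

```typescript
interface ChatResponse {
  prompt: string;
  aiResponse: string;
  id: string | number;
}

// Narrow an unknown JSON payload to ChatResponse before using it.
export function isChatResponse(data: unknown): data is ChatResponse {
  if (typeof data !== 'object' || data === null) return false;
  const d = data as Record<string, unknown>;
  return (
    typeof d.prompt === 'string' &&
    typeof d.aiResponse === 'string' &&
    (typeof d.id === 'string' || typeof d.id === 'number')
  );
}
```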
Deployment with Microsoft Azure
Deploying this application to Azure involves a few key services:
- Azure App Service: Host your Sails.js backend. It supports Node.js applications and integrates well with GitHub for CI/CD. Ensure environment variables like DATABASE_URL are configured.
- Azure Database for PostgreSQL: The managed database service we configured earlier. Connect your App Service to this database securely.
- Azure Container Instances / Azure Kubernetes Service: For hosting Ollama, you can containerize it and deploy it to ACI or AKS, allowing your Sails.js backend to communicate with it over the network.
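As a sketch of the ACI route, the official ollama/ollama container image can be deployed with the Azure CLI. The resource group and DNS label below are placeholders, and the CPU/memory sizing should be verified against the model you intend to run:

```shell
# Deploy the official Ollama image to Azure Container Instances.
# my-rg and my-ollama are placeholder names.
az container create \
  --resource-group my-rg \
  --name my-ollama \
  --image ollama/ollama \
  --ports 11434 \
  --cpu 4 \
  --memory 16 \
  --dns-name-label my-ollama-demo
```

The Sails.js backend would then point at the container's public FQDN (for example http://my-ollama-demo.&lt;region&gt;.azurecontainer.io:11434) instead of localhost, ideally via an environment variable such as OLLAMA_URL rather than a hardcoded address.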
Best Practices & Considerations
- Error Handling: Implement robust error handling on both frontend and backend, providing meaningful feedback to users.
- Security: Protect your API endpoints, validate input, and secure sensitive data. Use environment variables for database credentials.
- Scalability: Azure App Service and Azure Database for PostgreSQL offer scaling options. Monitor performance and adjust resources as needed.
- Ollama Hosting: For production, consider running Ollama in a dedicated VM or containerized environment on Azure with sufficient GPU resources if required for larger models.
Conclusion
By integrating React, Sails.js, Azure, and Ollama, you can build powerful, intelligent web applications that leverage local LLM capabilities within a scalable cloud infrastructure. This stack offers flexibility, performance, and a streamlined development experience with TypeScript.