
Full-Stack AI: Integrating Vue, AWS, Vercel, and Google Gemini

Published: Oct 21, 2025, 10:52 PM · 5 min read
Tags: vuejs · aws · vercel · gemini · api · typescript
Explore a robust stack for AI-powered web apps: Vue.js frontend, AWS serverless backend, Vercel deployment, and Google Gemini API integration. Learn about TypeScript, performance, and best practices.

As AI capabilities mature, integrating them into modern web applications is key to creating dynamic, intelligent user experiences. This post shows how to build a performant, scalable, and type-safe AI application by combining Vue.js on the frontend, Amazon Web Services (AWS) for a serverless backend, Vercel for deployment, and the Google Gemini API for generation. All code examples use TypeScript for an improved developer experience and maintainability.

The Integrated Architecture: A Synergy of Services

Our application architecture follows a decoupled, serverless approach, maximizing scalability, cost-efficiency, and development agility:

  1. Vue.js Frontend: Built with Vue 3 and TypeScript, the single-page application (SPA) provides the rich, interactive user interface.
  2. Vercel Deployment: Vercel hosts the Vue.js application, leveraging its global CDN for low-latency delivery and offering automatic deployments from Git.
  3. AWS API Gateway & Lambda: An AWS API Gateway exposes a RESTful endpoint, which triggers an AWS Lambda function. This function acts as a secure proxy to the Gemini API.
  4. Google Gemini API: The Lambda function interacts with the Gemini API to process prompts and generate AI-driven responses.

This setup ensures the sensitive Gemini API key is never exposed to the client, while AWS Lambda provides a scalable, on-demand compute environment.

Frontend Development with Vue.js & TypeScript

We start with a Vue 3 project, preferably created with Vite for its speed and native TypeScript support. Our frontend component will manage user input and display AI responses.
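
If you're starting from scratch, Vite's official scaffold produces a Vue 3 + TypeScript project in one command (the project name my-ai-app is just an example):

```shell
# Scaffold a Vue 3 + TypeScript project with Vite (project name is illustrative)
npm create vite@latest my-ai-app -- --template vue-ts
cd my-ai-app
npm install axios   # HTTP client used by the chat component below
npm run dev         # local dev server with hot module replacement
```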

// src/types/index.ts
export interface Message {
  id: string;
  text: string;
  sender: 'user' | 'gemini';
  timestamp: Date;
}

// src/components/GeminiChat.vue
<script setup lang="ts">
import { ref } from 'vue';
import axios from 'axios'; // npm install axios
import type { Message } from '../types';

const messages = ref<Message[]>([]);
const userInput = ref('');
const isLoading = ref(false);

const API_BASE_URL = import.meta.env.VITE_AWS_API_URL; // From Vercel env vars

const sendMessage = async () => {
  if (!userInput.value.trim() || isLoading.value) return;

  const userMessage: Message = {
    id: Date.now().toString(),
    text: userInput.value,
    sender: 'user',
    timestamp: new Date(),
  };
  messages.value.push(userMessage);
  const currentInput = userInput.value; // Capture input before clearing
  userInput.value = '';
  isLoading.value = true;

  try {
    const response = await axios.post<{ geminiResponse: string }>(
      `${API_BASE_URL}/gemini-chat`,
      { prompt: currentInput },
      { headers: { 'Content-Type': 'application/json' } }
    );
    const geminiMessage: Message = {
      id: Date.now().toString(),
      text: response.data.geminiResponse,
      sender: 'gemini',
      timestamp: new Date(),
    };
    messages.value.push(geminiMessage);
  } catch (error) {
    console.error('Error calling Gemini API:', error);
    messages.value.push({
      id: Date.now().toString(),
      text: 'Error: Could not get a response from Gemini.',
      sender: 'gemini',
      timestamp: new Date(),
    });
  } finally {
    isLoading.value = false;
  }
};
</script>

<template>
  <div class="chat-container">
    <div v-for="msg in messages" :key="msg.id" :class="['message', msg.sender]">
      {{ msg.text }}
    </div>
    <input v-model="userInput" @keyup.enter="sendMessage" :disabled="isLoading" placeholder="Ask Gemini..." />
    <button @click="sendMessage" :disabled="isLoading">{{ isLoading ? 'Sending...' : 'Send' }}</button>
  </div>
</template>

This GeminiChat.vue component demonstrates basic chat functionality. Notice the use of import.meta.env for accessing environment variables configured in Vercel and injected at build time. Keep in mind that any VITE_-prefixed variable is embedded in the client bundle, so it must never contain secrets — here it only holds the public API Gateway URL.
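
Because import.meta.env is loosely typed by default, you can declare the variable in Vite's ambient type file so typos are caught at compile time. This follows Vite's documented IntelliSense pattern; the file path is the Vite default:

```typescript
// src/vite-env.d.ts — augments Vite's ImportMeta typing for our custom variable
/// <reference types="vite/client" />

interface ImportMetaEnv {
  readonly VITE_AWS_API_URL: string; // set in Vercel project settings
}

interface ImportMeta {
  readonly env: ImportMetaEnv;
}
```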

Serverless Backend with AWS Lambda & API Gateway

AWS Lambda provides a cost-effective, auto-scaling environment for our API logic. We'll use API Gateway to expose our Lambda function via a public HTTP endpoint.

// aws/lambda/src/handlers/geminiChat.ts
import { GoogleGenerativeAI } from '@google/generative-ai';
import { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from 'aws-lambda';

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY as string); // Securely loaded via Lambda env vars
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" }); // Or any other currently available Gemini model

export const handler = async (event: APIGatewayProxyEventV2): Promise<APIGatewayProxyResultV2> => {
  // Declare CORS headers outside the try block so the catch handler can reuse them
  const headers = {
    'Content-Type': 'application/json',
    'Access-Control-Allow-Origin': '*', // Restrict to your Vercel domain in production
    'Access-Control-Allow-Methods': 'POST,OPTIONS',
    'Access-Control-Allow-Headers': 'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token',
  };

  // Handle preflight OPTIONS request from browser
  if (event.requestContext.http.method === 'OPTIONS') {
    return { statusCode: 200, headers };
  }

  try {

    const { prompt } = JSON.parse(event.body || '{}');

    if (!prompt) {
      return { statusCode: 400, headers, body: JSON.stringify({ message: 'Prompt is required.' }) };
    }

    const result = await model.generateContent(prompt);
    const response = await result.response;
    const text = response.text();

    return {
      statusCode: 200,
      headers,
      body: JSON.stringify({ geminiResponse: text }),
    };
  } catch (error) {
    console.error('Error interacting with Gemini API:', error);
    return {
      statusCode: 500,
      headers,
      body: JSON.stringify({ message: 'Failed to get response from Gemini.' }),
    };
  }
};
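
The body-parsing and validation step in the handler can be factored into a small pure helper so it is unit-testable outside the Lambda runtime. This `parsePrompt` helper is a sketch and not part of the handler above:

```typescript
// A hypothetical helper mirroring the handler's prompt extraction.
// Returns the prompt string, or null when the body is missing or malformed.
export function parsePrompt(body: string | undefined): string | null {
  try {
    const { prompt } = JSON.parse(body ?? '{}');
    return typeof prompt === 'string' && prompt.trim() !== '' ? prompt : null;
  } catch {
    return null; // Malformed JSON bodies are treated the same as a missing prompt
  }
}
```

Extracting pure logic like this keeps the handler thin and lets you test validation without mocking API Gateway events.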

Deploying this Lambda function and exposing it via API Gateway can be managed efficiently using infrastructure-as-code tools like the Serverless Framework or AWS CDK.
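
As one option, a minimal AWS CDK v2 sketch for this setup might look like the following. Stack and construct names are illustrative, and the API key should come from a secret store or CI environment, never from source control:

```typescript
// aws/cdk/lib/gemini-stack.ts — hedged CDK v2 sketch wiring the Lambda to an HTTP API
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { Runtime } from 'aws-cdk-lib/aws-lambda';
import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';
import { HttpApi, HttpMethod } from 'aws-cdk-lib/aws-apigatewayv2';
import { HttpLambdaIntegration } from 'aws-cdk-lib/aws-apigatewayv2-integrations';

export class GeminiStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Bundles the TypeScript handler with esbuild under the hood
    const geminiFn = new NodejsFunction(this, 'GeminiChatFn', {
      entry: 'aws/lambda/src/handlers/geminiChat.ts',
      handler: 'handler',
      runtime: Runtime.NODEJS_20_X,
      environment: {
        GEMINI_API_KEY: process.env.GEMINI_API_KEY ?? '', // supply via CI secrets
      },
    });

    const api = new HttpApi(this, 'GeminiApi');
    api.addRoutes({
      path: '/gemini-chat',
      methods: [HttpMethod.POST, HttpMethod.OPTIONS],
      integration: new HttpLambdaIntegration('GeminiChatIntegration', geminiFn),
    });

    // The endpoint printed here becomes VITE_AWS_API_URL in Vercel
    new cdk.CfnOutput(this, 'ApiUrl', { value: api.apiEndpoint });
  }
}
```

The CfnOutput URL is what you paste into Vercel's environment variable settings as VITE_AWS_API_URL.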

Seamless Deployment with Vercel

Vercel simplifies the deployment of our Vue.js application. Connect your Git repository (e.g., GitHub, GitLab) to Vercel, and it automatically detects your framework and configures builds. Key Vercel features for this stack:

  • Automatic Deployments: Every push to your main branch triggers a production deployment.
  • Environment Variables: Securely store VITE_AWS_API_URL within Vercel's project settings. This variable is injected into your Vue.js build at compile time.
  • Global CDN: Your frontend is distributed globally, ensuring low-latency access for users worldwide.
  • Instant Rollbacks: Easily revert to previous deployments if issues arise.

Performance Optimization: Speed and Scalability

Performance is paramount for a good user experience. Here's how we optimize this stack:

  1. Frontend (Vue.js):

    • Lazy Loading: Use defineAsyncComponent for routes and non-critical components to reduce initial bundle size and load only what's needed. For example: const MyComponent = defineAsyncComponent(() => import('./MyComponent.vue')).
    • Code Splitting: Vite automatically handles code splitting, but ensure dynamic imports (import()) are used for larger modules.
    • Efficient List Rendering: Always use a unique :key with v-for to enable Vue's efficient DOM reconciliation.
    • Image Optimization: While Vercel offers built-in image optimization, consider modern formats like WebP or AVIF and responsive images.
  2. Backend (AWS Lambda):

    • Cold Starts: For rarely invoked functions, initial requests might experience a slight delay ("cold start"). For critical APIs, consider Lambda Provisioned Concurrency to keep functions pre-initialized.
    • Monitoring: Utilize AWS CloudWatch for monitoring Lambda invocation times, errors, and performance metrics to identify bottlenecks.
  3. Network & Deployment (Vercel):

    • Global CDN: Vercel's CDN significantly reduces frontend asset load times.
    • Edge Functions: For logic that must run close to the user, Vercel's Edge Functions can further optimize response times for geolocation-aware routing or caching.

Regularly run Lighthouse audits and monitor Web Vitals to ensure your application remains performant.

Conclusion

By integrating Vue.js, AWS serverless, Vercel, and the Google Gemini API, we've crafted a modern, powerful, and highly scalable AI application. This stack offers a superior developer experience with TypeScript, robust performance through serverless architecture and CDN, and flexible deployment. As AI continues to evolve, this combination provides a solid foundation for building the next generation of intelligent web applications.