
My API Keys Keep Getting Pwnd: Here’s My Take

📖 9 min read · 1,697 words · Updated Apr 9, 2026

Hey everyone, Pat Reeves here, dropping in from the botsec.net bunker. Hope you’re all having a less bot-ridden week than I am. Seriously, it feels like every other news alert these days is about some new botnet flexing its muscles or an API getting pummeled by automated attacks. And honestly, it gets exhausting.

Today, I want to talk about something that’s been bugging me for a while: the escalating problem of API key exposure in serverless functions. These keys are often the crown jewels of our infrastructure, and they can be exposed far too easily when we’re not careful with our serverless deployments. We’re talking about giving bots the keys to the kingdom, not just a peek through the window.

It’s 2026, and serverless is no longer the new kid on the block; it’s practically mainstream. Everyone from startups to massive enterprises is using AWS Lambda, Azure Functions, Google Cloud Functions, you name it. And for good reason – scalability, cost efficiency, reduced operational overhead. It’s fantastic. But with great power, as they say, comes great responsibility, and in this case, it’s the responsibility to not accidentally broadcast your database credentials to every bot sniffing around.

I’ve seen this play out in various forms, from post-mortems I’ve reviewed to the occasional “oopsie” I’ve even caught myself (thankfully, in dev environments!). The core issue is often a misunderstanding of how environment variables, build processes, and deployment artifacts interact in a serverless context. It’s not just about putting your API key in a .env file and calling it a day. That’s a good start, but it’s not the whole story. Bots are getting smarter, and they’re looking for these slip-ups.

The Serverless Security Blind Spot: Environment Variables vs. Deployment Artifacts

Let’s be blunt: putting your API keys directly into your code is a cardinal sin. Everyone knows that. We’ve moved past hardcoding secrets. The current best practice, and one that many serverless platforms encourage, is using environment variables. You set them during deployment, and your function can access them at runtime. Simple, right?

The problem arises when we confuse runtime environment variables with what gets baked into our deployment package. Imagine you have a Node.js Lambda function. You build it, zip it up, and deploy it. During the build process, if you’re not careful, those environment variables you think are only set at runtime can inadvertently leak into your compiled or transpiled code. Or, even simpler, they can end up in configuration files that are included in your deployment package.

I remember consulting for a small e-commerce startup last year. They had a Lambda function that processed payments, connecting to a third-party API. They were diligent, or so they thought, about using environment variables for their payment gateway API key. However, during a security audit, we discovered that their build process, which involved a custom Webpack configuration, was actually embedding some of these “environment variables” directly into their JavaScript bundle. It wasn’t obvious. You had to decompile the minified JS to find it, but it was there, clear as day. A bot with enough tenacity could have found that. And if that key had been compromised, we’re talking about a payment gateway key – that’s a direct path to financial fraud.

The danger here is that once it’s in the deployment artifact, it’s potentially exposed. If someone gets access to your deployment package (e.g., via a misconfigured S3 bucket, a compromised CI/CD pipeline, or even just through a local developer’s machine), they have your key. And bots are constantly scanning for these sorts of misconfigurations. They’re not just looking for open ports anymore; they’re looking for data leaks.

How Does This Happen in Practice?

There are a few common scenarios I’ve seen:

  1. Build-time Environment Variable Injection: Tools like Webpack, Parcel, or even simple shell scripts can be configured to inject environment variables directly into your code during the build process. If you have process.env.API_KEY and your build script replaces that with the actual value, then that value is now part of your deployed code.
  2. Configuration File Over-inclusion: Developers often use configuration files (e.g., config.json, .env, settings.ini) during local development. If these files contain sensitive keys and aren’t properly excluded from the deployment package (e.g., via .gitignore or serverless deployment exclusions), they’ll go straight to your serverless function’s artifact.
  3. Logging and Debugging Leftovers: Sometimes, during development, we print environment variables or sensitive data to logs for debugging. If these logging statements aren’t removed or properly sanitized before deployment, and if your logs are accessible (even internally), it’s a potential leak.
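To make scenario 1 concrete, here’s a minimal sketch of what build-time injection effectively does. The key value and the DefinePlugin-style substitution below are illustrative assumptions, not anyone’s real config:

```javascript
// Illustration of build-time env injection (scenario 1 above).
// A plugin like webpack's DefinePlugin effectively performs a textual
// substitution over your source before bundling. The key below is fake.
const source =
  'fetch(url, { headers: { "x-api-key": process.env.API_KEY } });';

// At build time, with API_KEY=sk_live_abc123 in the build environment,
// the substitution yields:
const injectedValue = JSON.stringify('sk_live_abc123'); // pretend this came from process.env
const bundled = source.replace(/process\.env\.API_KEY/g, injectedValue);

console.log(bundled);
// The literal key is now part of the artifact -- anyone who can read
// the bundle (including a bot) can grep it out.
```

The point: after the build, there is no `process.env` lookup left to protect you; the secret is a plain string in the shipped file.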

Practical Steps to Protect Your API Keys in Serverless

Alright, enough doom and gloom. Let’s talk about how to actually fix this. My aim here isn’t to scare you away from serverless, but to arm you with the knowledge to use it safely.

1. Use Dedicated Secrets Management Services

This is the gold standard. Don’t just rely on environment variables set directly at deployment. Instead, use a secrets management service provided by your cloud provider. AWS Secrets Manager, Azure Key Vault, Google Cloud Secret Manager – these are designed for this exact purpose.

The pattern is simple: your serverless function, at runtime, makes a call to the secrets manager to retrieve the key. This means the key is never stored directly in your deployment package or even as a plain environment variable that could accidentally leak. The function’s IAM role (or equivalent) is granted permission to access specific secrets. If a bot somehow gets your deployment package, they’re not getting your API key.

Example (AWS Lambda with Node.js and Secrets Manager):


// AWS SDK v2; newer Node.js Lambda runtimes ship SDK v3 (@aws-sdk/client-secrets-manager)
const AWS = require('aws-sdk');
const secretsManager = new AWS.SecretsManager();

async function getSecret(secretName) {
  try {
    const data = await secretsManager.getSecretValue({ SecretId: secretName }).promise();
    if ('SecretString' in data) {
      return JSON.parse(data.SecretString);
    }
    // Binary secrets come back base64-encoded
    const buff = Buffer.from(data.SecretBinary, 'base64');
    return JSON.parse(buff.toString('utf-8'));
  } catch (err) {
    console.error('Error retrieving secret:', err);
    throw err;
  }
}

exports.handler = async (event) => {
  let apiKey;
  try {
    const secrets = await getSecret('MyPaymentGatewayAPIKey'); // Name of your secret in Secrets Manager
    apiKey = secrets.API_KEY; // Assumes the secret stores a JSON object like {"API_KEY": "your_actual_key"}
  } catch (err) {
    return { statusCode: 500, body: JSON.stringify('Failed to retrieve secret.') };
  }

  // Use apiKey here -- and never log the key itself
  console.log('API key status:', apiKey ? 'retrieved' : 'missing');

  return {
    statusCode: 200,
    body: JSON.stringify('Function executed successfully!')
  };
};

Make sure your Lambda’s execution role has permissions like secretsmanager:GetSecretValue for the specific secret.
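One refinement worth considering: every Secrets Manager call adds latency and cost, so a common pattern is to cache the retrieved secret in a module-level variable, which survives across warm invocations of the same container. Here’s a minimal, self-contained sketch; fetchSecret is a hypothetical stand-in for the real Secrets Manager lookup:

```javascript
// Sketch: cache a secret across warm invocations. fetchSecret is a
// hypothetical stand-in for the real Secrets Manager call.
let cachedSecret = null;
let fetchCount = 0;

async function fetchSecret(name) {
  fetchCount += 1; // counts how many "network" calls we make
  return { API_KEY: 'dummy-key-for-' + name };
}

async function getSecretCached(name) {
  if (!cachedSecret) {
    cachedSecret = await fetchSecret(name); // cold start: fetch once
  }
  return cachedSecret; // warm invocations: reuse the cached value
}

// Simulate two warm invocations of the same container:
(async () => {
  await getSecretCached('MyPaymentGatewayAPIKey');
  await getSecretCached('MyPaymentGatewayAPIKey');
  console.log('fetch calls:', fetchCount); // prints "fetch calls: 1"
})();
```

The trade-off is that a rotated secret won’t be picked up until the container is recycled, so pair this with a sensible cache TTL if you rotate keys aggressively.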

2. Rigorous .gitignore and Deployment Exclusions

This sounds basic, but it’s still a huge source of leaks. Ensure your .gitignore file is comprehensive. Don’t just exclude node_modules. Exclude .env files, any local configuration files that might contain secrets, and temporary build artifacts.
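For instance, a serverless project’s .gitignore might carry entries like these (file names here are illustrative, not a complete list):

```text
# Secrets and local config -- never commit these
.env
.env.*
config.local.js
*.pem

# Build output and dependencies
node_modules/
.serverless/
dist/
```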

Beyond .gitignore, most serverless frameworks (like Serverless Framework or SAM) allow you to explicitly define files and folders to exclude from your deployment package. Use these features! For instance, in serverless.yml:


package:
  exclude:
    - .env
    - config.local.js
    - tests/**
    - README.md

This ensures that even if a local config file somehow makes it past your .gitignore during a manual build, the framework will keep it out of the deployed artifact. (Note that newer versions of the Serverless Framework replace exclude with package.patterns, where exclusions are written as negated entries like - '!.env'.)

3. Audit Your Build Process

This is where that e-commerce startup went wrong. If you’re using bundlers (Webpack, Rollup, Parcel) or transpilers (Babel, TypeScript), understand how they handle environment variables. By default, they might not inject them. But if you or a team member added a plugin or configuration that does, it’s a significant risk.

  • Review Webpack configurations: Look for webpack.DefinePlugin or similar plugins that might be replacing process.env.YOUR_KEY with its literal value.
  • Understand CI/CD pipelines: Ensure your CI/CD scripts aren’t accidentally copying sensitive files into the build directory before zipping.

A simple check: deploy a test function with a dummy key and then download its deployment package (if your cloud provider allows, like downloading a Lambda ZIP). Unzip it and search for your dummy key. If you find it, you have a problem.

4. Principle of Least Privilege for IAM Roles

This isn’t directly about preventing keys from leaking into artifacts, but it’s crucial for overall security. If your function does need to access a secret from a secrets manager, grant it only the specific permissions it needs for that specific secret. Don’t give it secretsmanager:* or access to all secrets. This minimizes the blast radius if your function’s role is ever compromised by a bot.

Example (AWS IAM Policy for a Lambda Role):


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue"
      ],
      "Resource": [
        "arn:aws:secretsmanager:REGION:ACCOUNT_ID:secret:MyPaymentGatewayAPIKey-??????",
        "arn:aws:secretsmanager:REGION:ACCOUNT_ID:secret:AnotherServiceAPIKey-??????"
      ]
    }
  ]
}

Notice the specific ARNs and the six-character wildcard at the end, which matches the random suffix AWS appends to secret names. This policy only allows getting the value of those two specific secrets.

Actionable Takeaways for BotSec Readers

Look, the “serverless revolution” is great, but it doesn’t mean we can slack on security. Bots are out there, relentlessly poking at every exposed surface. Don’t let your API keys be one of them.

  1. Move ALL API keys and sensitive credentials to a dedicated secrets management service. This is non-negotiable for production environments.
  2. Review your build and deployment processes. Understand exactly what goes into your serverless deployment package. Download it, unzip it, and search for sensitive strings.
  3. Harden your .gitignore and deployment exclusion rules. Be explicit about what *doesn’t* get deployed.
  4. Implement the principle of least privilege for your function’s IAM roles. Only grant access to the specific resources and actions absolutely necessary.
  5. Educate your team. Share these common pitfalls with your developers. A collective understanding of these risks is your best defense.

The goal isn’t just to keep human attackers out, but to build systems that are resilient against the automated, relentless probing of bots. A leaked API key is a bot’s dream come true, giving them direct access to your services, data, and even payment systems. Let’s not make it easy for them.

Stay vigilant, stay secure, and I’ll catch you next time here on botsec.net.

✍️ Written by Jake Chen, AI technology writer and researcher.