
Apigee Edge for Private Cloud 4.19.01 Is Here!

We’re excited to announce the general availability of Apigee Edge for Private Cloud 4.19.01. This release gives our customers even more flexibility to manage their APIs, with features like OpenAPI Specification 3.0 (OAS 3.0) support, self-healing with apigee-monit, TLS security improvements, virtual host management enhancements, and additional software support. Most notably, we are making the New Edge experience generally available to all customers.

The New Edge experience sits on top of the same platform that powers the "classic" Edge experience but adds several enhancements, particularly in the areas of API design and publishing. You’ll also notice an updated, modern look across all parts of Apigee Edge.

The New Edge experience is fully backward compatible: your current API proxies and applications will continue to work, so no migration is required. You can easily switch back to the classic view in the UI.

Some of the features we’ve released are only available in the New Edge experience, including:

  • Virtual hosts management in the UI
  • OAS 3.0 support

With this release, Apigee Edge Monetization, which was previously available only in the classic UI, is now generally available in the New Edge experience.

SAML single sign-on, the recommended way to secure modern enterprise apps, is required in the New Edge experience.

Virtual hosts management enhancements

With the New Edge experience, you can now easily create, update, and delete virtual hosts from the UI itself rather than through the command line interface.



OpenAPI Specification 3.0 support

API proxies can now be created from OAS 3.0 documents (the most recent version of the spec) in the proxy wizard.
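For illustration, here is a minimal OAS 3.0 document of the kind the proxy wizard accepts; the service, path, and titles are hypothetical:

```yaml
openapi: "3.0.0"               # the version field that distinguishes OAS 3.0 from Swagger 2.0
info:
  title: Greetings API         # hypothetical example service
  version: "1.0.0"
paths:
  /greetings:
    get:
      summary: List available greetings
      responses:
        "200":
          description: A JSON array of greeting strings
```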

Self-healing and monitoring

Several of our customers have told us about the benefits they’ve experienced from using Apigee in conjunction with Monit, an open-source process supervision tool. These benefits include increased uptime and resilience of the overall system.

To better enable all our customers to use Monit, we’re launching apigee-monit, which adds self-healing capabilities to help ensure all Apigee Edge components remain up and running all the time. It does this by providing the following services:

  • Restarting failed services
  • Displaying summary information
  • Logging monitoring status
  • Sending notifications
  • Monitoring non-Edge services

Apigee-monit architecture
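As a sketch of the supervision model (this is generic Monit syntax with hypothetical paths and ports, not the configuration apigee-monit actually generates), a stanza for a single Edge component might look like:

```
check process edge-router with pidfile /opt/apigee/var/run/edge-router.pid
  start program = "/opt/apigee/apigee-service/bin/apigee-service edge-router start"
  stop program  = "/opt/apigee/apigee-service/bin/apigee-service edge-router stop"
  # If the health port stops answering, restart the component.
  if failed port 8081 protocol http then restart
```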

TLS improvements

We’ve added more TLS configuration options to provide our customers with more choices in selecting the protocols and cipher suites that best fit their needs. These options now include the TLS protocol versions (1.2 by default) and TLS cipher parameters.
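As a generic illustration of what these two knobs control (Apigee Edge itself is configured through its own properties files, not through Python), here is how a server can pin a minimum protocol version and a cipher list with Python's standard ssl module:

```python
import ssl

# Server-side TLS context that refuses anything below TLS 1.2 and
# narrows the TLS 1.2 cipher suites to modern ECDHE/AES-GCM choices.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("ECDHE+AESGCM:!aNULL:!MD5")

# Inspect what survived the cipher filter (TLS 1.3 suites are kept
# by OpenSSL regardless of the filter string).
print(len(ctx.get_ciphers()), "cipher suites enabled")
```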

Software support

This release adds support for RHEL 7.6, CentOS 7.6, and Oracle Linux 7.5. The full list of supported versions can be found here. Apigee recommends running the Apigee stack on the latest OS releases.  

Other improvements

  • New best practices for Private Cloud deployment on AWS around networking and Cassandra.
  • Organization name validation is now enforced in the setup script; it can be skipped with the optional silent-config parameter SKIP_ORG_NAME_VALIDATION=y

How to upgrade

We encourage you to upgrade to this new release to start benefiting from the added features, UI enhancements, and bug fixes. You can update Apigee Edge versions 4.17.09, 4.18.01, and 4.18.05 to 4.19.01 directly. If you are running a version older than 4.17.09, you must first migrate to version 4.18.01 and then update to 4.19.01.

Here are some links to get you started:

If you’re new to Apigee Edge for Private Cloud, you can install a new instance by following the installation instructions.

There’s a lot more to share than what we’ve covered here; additional details can be found in our release notes. Visit the Apigee Community to ask questions, leave feedback, or start a conversation.

The Year Ahead in APIs

APIs have come a long way from the arcane geek-speak of software interfaces popularized by Win32 APIs (still among the most commonly searched API phrases). Today, APIs represent interfaces between businesses and large swaths of internal enterprise services and business units. APIs not only connect software to software but also help to create entire commercial ecosystems, and so have become integral parts of how enterprises conduct business.

Google Cloud's Apigee team has watched the API space evolve over the last decade or so, and we believe that the onwards and upwards march of “APIfication” within enterprises will take a few surprising and not-so-surprising twists and turns this year. Below are some of our predictions around the use and impact of APIs in 2019.

API standards

As APIs rapidly become the “contracts” between software systems within and outside of an enterprise, it seems natural that these contracts should get standardized. Shouldn’t there be one way to call an API to make a payment? Check a balance? Order a ticket? However, such standards have proven elusive because developers who write programs that call these APIs have largely been comfortable in tailoring their code to various API providers.

In 2019, we believe that the momentum will begin to shift toward greater API standardization in the following areas.


Hype surrounding GraphQL will accelerate

GraphQL will be positioned as the first technology to solve the long-standing challenges around delivering reusable services and APIs. It will be most popular for first-party APIs that are intended to be used by mobile and single-page apps, although some API providers will also adopt it as an option, alongside conventional “REST-ish” APIs. We anticipate that as GraphQL’s popularity grows, there will be a vocal community that becomes highly critical of OpenAPI and argues that it is a legacy standard analogous to WSDL in the era of SOAP.


gRPC adoption will continue to grow

We predict that gRPC will continue to see adoption in 2019 due to its importance within the Kubernetes ecosystem, and many microservices will be gRPC-based. Most APIs intended for third-party developers will continue to be REST-based, but API providers will start to offer gRPC as an option, particularly for high-throughput and low-latency scenarios.

APIs in industries: mandates, competition, and standards

For open banking and PSD2 adherence, financial services regulators have mandated APIs as a way to spur competition and foster innovation among banks. We expect to continue to see this trend sweep the globe as more countries require banks to give third-party providers access to customer information and the ability to initiate payments via APIs, and we expect to see regulators specify standards for these APIs.

While banks have little choice but to comply with these mandates, we’ve seen new industry-generated API specifications emerge, such as the Durable Data API (DDA) and the BIAN standards. Banks are starting to view APIs as a way to compete in an increasingly digital world, instead of seeing them as simply a regulatory compliance issue.

We believe that in 2019, new banking API standards will emerge and will quickly gain traction because they are focused on helping banks acquire customers, lend more, and build software faster to compete in the API economy. We expect to see such standards emerge in other industries as well, presenting organizations with the challenge of deciding which standards to support.  

There is a second trend that we believe will force the standards issue. APIs will increasingly be called by machines, which often sit behind web properties such as Google Search. These machines and web properties will demand programmatic integrations, and this can only happen when some standards emerge. Schema.org is a good place for some of these standards to emerge (e.g., in parcel delivery), though others are entirely possible. We expect dozens of these specifications to emerge in 2019, often expressed in the OpenAPI format, giving canonical call/response structures for various verticals.

The rise of machine and AI-driven API traffic

Today, most API traffic can be attributed to some human action. A consumer browses a product: a few API calls are made. A homeowner pays a utility bill: another few API calls. When traffic is machine generated, on the other hand, it is often malicious, comprising bots or attempted security breaches. We expect this to continue. Whether it’s for crypto-mining or for credential stealing, we expect APIs to continue to bear the brunt of machine-driven traffic, which will burden backends unless the right security is built into APIs.

That said, we are beginning to see benign programmatic API calls, generated by algorithms or machine intelligence, take off. This is driven by several trends:

  1. The rise of voice applications. Voice, in the end, needs to be heavily AI/ML driven, so a request such as “Pay my bill” needs to be understood as “Pay my utility bill from PG&E for the current month using my stored credit card.” Simple requests into the voice system result in hundreds of API calls at the backend, all driven by machine intelligence figuring things out.
  2. The rise of IoT and home automation. At the recent CES conference, communicating devices were everywhere. They integrated with one another and with voice assistants through APIs, and through recipes such as IFTTT. With hundreds of thousands of different types of devices, bespoke integrations just do not work; APIs simplify the mix and match, though they don’t necessarily alleviate the need for some deeper business logic.
  3. AI going mainstream. AI is only useful when it can be leveraged in applications. However, not every team, or every enterprise, has the capability to do AI from scratch. We will see API-driven AI, where one team, or one business, builds a very good model in some domain, and other teams leverage that work through APIs. These teams might build their own AI models, which, in turn, another team might leverage. We are already seeing examples of this, like Google’s AutoML for image and text analysis, and we expect this trend to accelerate.

API-driven ecosystems

We’ve noticed that enterprises have begun to understand the importance of developers. Of the top 100 domains (defined as those with the highest number of pages that appear in a sample of 10 billion domains on the web), 94% had some developer-facing property, and among those 94%, 100% were offering APIs in the Swagger framework or the OpenAPI spec.

Developer offerings will become more prevalent and more API-centric

While 94% of the top 100 domains offer something for developers, in the same sample, this number falls to 9.5% for the top one million domains. We expect this skew to become less pronounced as the importance of developers becomes well understood by a larger number of domains.

REST APIs will be specified by OpenAPI

We are already seeing a trend: OpenAPI specs (formerly known as Swagger) are becoming the de facto standard for specifying APIs that enable developer self-service. Anecdotally, when we ask our customers if they use the OpenAPI spec, the typical answer we get is, “of course!” In the analysis above, we only looked for Swagger or OpenAPI patterns, and even there, the percentage was very high.

API startups will proliferate

In the wake of Twilio and SendGrid, startups that provide infrastructure services via APIs will once again be considered viable investments for venture capitalists.  

Microservices and APIs

While APIs that drive ecosystems and are visible to the public attract a lot of press, a much larger number of APIs are found inside enterprises, as interfaces between software systems and teams. Many of these APIs will be called “microservices,” even though they do not fit any serious definition of “micro.”

Envoy will be increasingly popular as the open source technology for APIs

With support from many vendors, and with large enterprise implementations under its belt, Envoy is fast becoming the most popular choice for open source API gateways. We expect commercial offerings around Envoy to continue to proliferate in 2019.

A majority of enterprises will view microservices as modernized SOA

Except for a small set of enterprises who are deep into cloud-native architectures, the majority of enterprises who say they are using microservices will in fact be using the term to describe internal APIs with lightweight governance. The result will be that microservices hype will increase considerably as vendors try to market their solutions to all possible microservices projects.

Microservices will continue to have lots of different recipes

Successful microservices architecture is complex to design, build, and manage. There is a lot of experimentation and iteration, but it is early days for microservices, and a proven recipe for success has yet to emerge. Key to realizing the promise and benefits of microservices architecture will be successfully designing and building reusable, decoupled services that deliver scalability and agility for the business, with the right level of governance and lifecycle management capabilities. The availability of appropriate tooling for supporting and debugging microservices architecture will be key to its success in the enterprise.  


API security

APIs represent a way to access enterprise services. They are therefore also a convenient point of attack. While API vulnerabilities have garnered some attention, we believe that unsecured APIs will be a fresh vector of attack in 2019.

Breaches of APIs for crypto mining

The Kubernetes API server vulnerability has shown that unsecured APIs can be used as a vector for taking over container orchestration platforms, yielding immediate financial gains. Every business that uses elastic cloud infrastructure can be a target for attacks that attempt to inject cryptomining code into its cloud workloads. This will catch off guard many businesses that believe they haven’t put anything valuable in the cloud, when they find themselves paying the compute bills generated by criminal crypto-miners.

Breaches because of poor API security

Developers have long understood that their web sites are vulnerable to attack, and best practices for securing them have become more and more common. External APIs are still taking off, and the equivalent best practices are not yet widespread. In 2019, we believe we’ll see at least three types of breaches due to poor API security, and all API management vendors will have serious conversations about API abuse with their top traffic customers. These abuses could include:

  • DDoS attacks (high traffic rates) breaking API backends
  • Spam (APIs processing large amounts of junk content)
  • Credential abuse (reusing credentials to break into protected APIs)


As APIs become mainstream, they offer an unprecedented opportunity to drive new business opportunities through ecosystems and new ways of rebooting enterprise architectures via microservices. APIs will support new formats, and some standardization will take root. Machine-driven API traffic (especially AI traffic) will become a new growth vector. Internal projects will leverage APIs, but dramatic new things will not happen. And security will need to be a continuous focus.

Happy 2019 from all of us in Google Cloud’s Apigee team!


4 Tips for Better API Security in 2019

Whether in the tech press or analyst reports, it became more common in 2018 to see the words “API” and “security”—or worse, “API” and “breach”—together in the same headline.

APIs are not only the connective tissue between applications, systems, and data, but also the mechanisms that allow developers to leverage and reuse these digital assets for new purposes. APIs factor into almost every digital use case, and their role in security news isn’t an intrinsic flaw in APIs any more than vaults are categorically flawed simply because some of them have been cracked.

But the headlines nevertheless reinforce an important message: if API security isn’t at the top of an enterprise’s 2019 priorities, that list of priorities is incomplete.

Continue reading this article on Medium.

Spideo: Personalizing Content with Apigee

Editor's note: Today we hear from Spideo vice president of engineering Randa Zarkik and Paulo Henrique, the company's integration engineer. The Spideo Personalization Platform provides unique API-based modular solutions to build recommendation features and smart data around content and users. Read on to see how Apigee provides Spideo's API program with security and simplicity—and even has helped the company with its sales efforts.

Spideo is an artificial intelligence-based recommendation and personalization platform that provides cost-effective tools for creative industries to build recommendations and smart data around users and content. Spideo is dedicated to the new generation of content apps and platforms. A driving force behind Spideo's recommendation platform is the use of natural language based on semantic metadata. The company works with all kinds of creative content, including videos, audio, images, and documents.

Trust, transparency, control

All of Spideo's features are API-based, which means that our API management platform is at the heart of our business. We built our features around three core values: trust, transparency, and control. We needed an API management platform that could support these pillars with its own features.

We wanted to find a more secure way to showcase our recommendation API to customers and prospects and expose all of our end points simply and cost effectively. We needed a solution that didn’t involve any participation from our seven-person developer team, as we’re a small group that needs to maintain our focus on developing recommendation algorithms. We were also looking for a platform that was straightforward enough that our integration engineer could do the whole job by himself, and that’s exactly what we got.

We chose Apigee from Google for its simplicity, enhanced security, reporting, and analytics, but also because of the great feedback from partners who had recommended the platform to us. We were able to set up security quickly and upload our existing API documents, which meant we didn't have to invest a lot of time in migrating to Apigee. The information and support that we received from Google’s Apigee team while setting up the environment has been excellent.

We’ve already seen significant ROI from Apigee. Prior to implementing the platform, Spideo spent an average of 15 to 20 developer days per year just on keeping API documentation up to date. Since transitioning to Apigee, we’re saving valuable developer time, and that’s only one aspect of the benefits we’re seeing.

Demo innovation boosts sales

We went live with Apigee in June 2018. When we chose the solution, we noticed that it was also a great tool for building confidence with our prospects and customers. We were inspired to do something innovative, using Apigee as a way to demo Spideo’s features, which impacts how we do sales at the company.

We set up a remotely accessible environment through the Spideo website where customers and prospects can access the recommendation API in Apigee. While they’re playing with the API, evaluating results, and testing response times, we’re able to track who they are and what they’re doing using Apigee analytics. This really aids the sales cycle because we can proactively address specific questions and concerns based on real user experiences. This has also been a great tool for lead generation; we’ve had some prospects turn into customers thanks to the platform.

Self-service saves time

At this point we have 34 companies registered in our Apigee demo environment. We offer a sample content catalog with simulated user history, so customers can plug and play to test end points immediately after registering. This demo environment has not only created a lot of confidence with customers, but it also saves us a lot of time on support. With Apigee, our customers can integrate new features themselves just by going to the Apigee test site. They can see the returns and test outputs directly without support from our engineers. This saves time for both Spideo and our customers.

Recommending APIs

We’re very excited about Spideo’s innovation in AI-based recommendations. We’re on the leading edge of what’s being done with personalization for creative content, combining human expertise, content semantics, and AI. Having Apigee as our API management platform means that we can maintain our focus on building new algorithms that push the envelope rather than spend a lot of time and energy on managing our APIs.


TradeIt: API-First Online Trading with Apigee

Editor's note: Today we hear from Joel Hancock, head of product at TradeIt. TradeIt is dedicated to helping people stay in control and connected to their investments by building the underlying API infrastructure to link app developers with financial institutions. 

TradeIt is the leading API for online investing. We connect retail brokers to the TradeIt ecosystem and distribute our API to app developers who use it to enable their users to view portfolios and trade directly from brokerage accounts.

Most of the large retail brokers are integrated into our ecosystem. We've also just distributed our API into one of the largest financial media applications and we’re working closely with several others. We’re excited to be collaborating as well with Google for future integration with Google Assistant.

When we were getting ready to start building an API ecosystem for investing, we had to start thinking early about our API gateway, session management, our API interface, and the developer portal. Initially we started building every one of those on our own, which was a huge undertaking.

As we faced the challenges of scaling and growth, we realized that we should look at what was available on the market; the smart move would be to leverage a best-in-class API solution instead of building everything in-house. We looked at a few solutions and found Apigee the best for several reasons. We especially appreciated Apigee's technology partnerships and the overall functionality of the API developer portal.

At this point, we use Apigee internally in a proof of concept. We plan to go live in Q4 of this year. For us, as an API-first company, Apigee is more of an infrastructure adjustment than a real change in how we do business, and we expect the changeover to be transparent to our users. We don't see the switch to Apigee primarily as a way to push more product-based features; its value is that it's simply more sustainable in the long term. We couldn’t possibly build, manage, and maintain the kind of feature set, security, stability, scalability, or innovations that the Apigee API management platform gives us.

We built the platform before implementing Apigee. I believe that if we had known that Apigee was out there, we would have started with it from the beginning, saving a lot of time and development costs in the process. Right now, we’re at the point in our migration where we’re seeing that Apigee does a great job at providing us analytics on our API usage, and we can already leverage the developer portal.

I think the main challenge for our team of 10 developers was figuring out how to integrate Apigee to match our business project. The Apigee platform was very helpful when it came to building our own API interface and what we expose to our developers.

We still have a lot of business logic that's very specific to the brokerage field that has to be integrated in the best possible way with Apigee, but we’ve been encouraged so far. Our model is a single API made from aggregating many different trading APIs, and people use us to streamline connecting with popular retail brokers. We’re working on building the logic to map between our APIs and the brokers’ APIs so that we can complete our migration to Apigee.

We’re excited about seeing significant ROI from Apigee soon!


Riding the Wave of Digital Disruption

5 tips to help legacy businesses operate like startups

Startups often face significant challenges. At a bare minimum, they need to define their value proposition, build a service, get funding, define a business model, drive sales, and recruit talent—all with a severely constrained staff. Many startups fail because of their inability to address any number of those challenges.

Despite these hurdles, many startups have a leg up over long-established competitors. Read the rest of this article on ProgrammableWeb to learn about five key enablers to help large enterprises overcome disadvantages they might face when compared to nimble digital natives. 

Best Practices for Building Secure APIs

Editor's note: API security remains a critical issue for our readers. For evidence, look no further than this article, the all-time most popular post on Apigee's "APIs and Digital Transformation" Medium publication. With that in mind, we reprise it here.



API designers and developers generally understand the importance of adhering to design principles while implementing an interface. No one wants to design or implement a bad API!

Even so, it’s sometimes tempting to look for shortcuts to reach those aggressive sprint timelines, get to the finish line, and deploy an API. These shortcuts may pose a serious risk: unsecured APIs.

Developers should remember to wear the hat of an API hacker before deploying. If a developer neglects to identify the vulnerabilities in an API, the API could become an open gateway for malicious activity.

Identifying and solving API vulnerabilities

An API can work for or against its provider depending on how well the provider has understood and implemented its API users’ requirements. If a company builds an incredibly secure API, it might end up very hard to use. A fine balance needs to be struck between the purpose of an API and ease of consumption. In this post, we’ll explore some of the API vulnerabilities we’ve come across through our work as part of Google’s Apigee team, including how these vulnerabilities might have been prevented.


Injection threats

APIs are the gateways for enterprises to digitally connect with the world. Unfortunately, there are malicious users who aim to gain access to enterprises’ backend systems by injecting unintended commands or expressions to drop, delete, update, or even create arbitrary data available to APIs.

In October 2014, for example, Drupal announced a SQL injection vulnerability that granted attackers access to databases, code, and file directories. The attack was so severe that attackers may have copied all data out of clients’ sites. There are many types of injection threats; the most common are SQL injection, RegEx injection, and XML injection. We have seen APIs go live without threat protection more than once; it’s not uncommon.

APIs without authentication

An API built without authentication to protect it from malicious threats represents an API design failure that can endanger an organization’s databases. Ignoring proper authentication, even if transport layer security (TLS) is used, can cause problems. With a valid mobile number in an API request, for instance, any person could get personal email addresses and device identification data. Industry-standard strong authentication and authorization mechanisms like OAuth/OpenID Connect, in conjunction with TLS, are therefore critical.

Sensitive data in the open

Normally, operations teams and other internal teams have access to trace tools for debugging issues, which may provide a clear view of API payload information. Ideally, PCI cardholder data (CHD) and protected health information (PHI) are encrypted from the point where data is captured all the way to where data is consumed, though this is not always the case.

With growing concerns about API security, encryption of sensitive and confidential data needs to be a top priority. For example, in June 2016, an HTTP proxy vulnerability was disclosed that provided multiple ways for attackers to proxy the outgoing request to a server of choice, capture sensitive information from the request, and gain intelligence about internal data. Beyond using TLS, it’s important for API traffic to be protected by encrypting sensitive data, implementing data masking for trace/logging, and using tokenization for card information.

Replay attacks

A major potential concern for enterprise architects is the so-called “transaction replay.” APIs that are open to the public face the challenge of figuring out whether to trust incoming requests. In many cases, even if an untrusted request is made and denied, the API may politely allow the potentially malicious user to try, and try again.

Attackers leverage this misplaced trust by attempting to play back or replay a legitimate user request (in some cases using brute force techniques) until they are successful. In 2016, hackers got into GitHub accounts via a replay attack, reusing email addresses and passwords from other online services that had been compromised and trying them on GitHub accounts.

Countermeasures include rate-limiting policies to throttle requests, the use of sophisticated tools like Apigee Sense to analyze API request traffic, and identification of patterns that represent unwanted bot requests. Additional security measures to stymie replay attacks include:

  • HMAC, which incorporates timestamps to limit the validity of the transaction to a defined time period
  • two-factor authentication
  • enabling a short-lived access token by using OAuth
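The first of these, HMAC with an embedded timestamp, can be sketched in a few lines of Python; the secret key, freshness window, and payload below are all hypothetical:

```python
import hashlib
import hmac
import time

SECRET = b"shared-secret"   # hypothetical key shared with the client
MAX_AGE_SECONDS = 300       # signatures older than 5 minutes are rejected

def sign(payload: bytes, ts: int) -> str:
    # Folding the timestamp into the MAC means a captured request
    # cannot be replayed once its window expires.
    return hmac.new(SECRET, payload + str(ts).encode(), hashlib.sha256).hexdigest()

def verify(payload: bytes, ts: int, signature: str, now=None) -> bool:
    now = int(time.time()) if now is None else now
    if abs(now - ts) > MAX_AGE_SECONDS:
        return False  # stale timestamp: treat as a replay
    # compare_digest avoids timing side channels during comparison.
    return hmac.compare_digest(sign(payload, ts), signature)

ts = int(time.time())
sig = sign(b'{"amount": 10}', ts)
print(verify(b'{"amount": 10}', ts, sig))                 # True: fresh, untampered
print(verify(b'{"amount": 99}', ts, sig))                 # False: payload altered
print(verify(b'{"amount": 10}', ts, sig, now=ts + 3600))  # False: replayed later
```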

Unexpected surges in API usage

It’s always tricky to estimate the usage of an API. A good example is the app that briefly brought down the National Weather Service API. This particular API didn’t have any kind of traffic surge prevention or throttling mechanism, so the unexpected surge in traffic directly hit the backend.

A good practice is to enforce spike arrest or a per-app usage quota, so that the backend won’t be impacted. This can be easily rolled out with the help of a sophisticated API management platform with policies like quota and spike arrest.
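Conceptually, a spike arrest behaves like a token bucket in front of the backend. The sketch below illustrates the idea only; in Apigee, spike arrest and quota policies are configured declaratively in the proxy, not hand-coded:

```python
import time

class SpikeArrest:
    # Token-bucket sketch: the bucket refills at `rate_per_second`
    # and each admitted request spends one token.
    def __init__(self, rate_per_second: float):
        self.rate = rate_per_second
        self.capacity = rate_per_second
        self.tokens = rate_per_second
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should answer 429 Too Many Requests

limiter = SpikeArrest(rate_per_second=2)
# A burst of five back-to-back calls: the first two pass, the rest are shed.
print([limiter.allow() for _ in range(5)])
```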

Keys in URI

For some use cases, implementing API keys for authentication and authorization is good enough. However, sending the key as part of the Uniform Resource Identifier (URI) can lead to the key being compromised. As explained in IETF RFC 6819, because URI details can appear in browser or system logs, another user might be able to view the URIs from the browser history, which makes API keys, passwords, and sensitive data in API URIs easily accessible.

It’s safer to send API keys in the message authorization header, which is not logged by network elements. As a rule of thumb, use the HTTP POST method with the payload carrying sensitive information.
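The contrast is easy to see with Python's standard library; the endpoint and key below are hypothetical:

```python
import urllib.request

API_KEY = "example-key-123"  # hypothetical credential

# Risky: the key rides in the URI, so it lands in browser history,
# proxy logs, and server access logs.
risky = urllib.request.Request(
    "https://api.example.com/v1/orders?apikey=" + API_KEY)

# Safer: the key travels in the Authorization header, which network
# elements do not normally log.
safe = urllib.request.Request("https://api.example.com/v1/orders")
safe.add_header("Authorization", "Bearer " + API_KEY)

print(safe.get_header("Authorization"))  # Bearer example-key-123
```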

Stack trace

Many API developers become comfortable using 200 for all successful requests, 404 for all failures, 500 for some internal server errors, and, in some extreme cases, 200 with a failure message in the body, on top of a detailed stack trace. A stack trace can become an information leak to a malicious user when it reveals underlying design or architecture implementations in the form of package names, class names, framework names, versions, server names, and SQL queries.

Attackers can exploit this information by submitting crafted URL requests, as explained in this Cisco example. It’s a good practice to return a “balanced” error object with the right HTTP status code, the minimum required error message, and no stack trace during error conditions. This improves error handling and protects API implementation details from attackers. The API gateway can transform backend error messages into standardized messages so that all error messages look similar; this also avoids exposing the backend code structure.
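A minimal sketch of such a “balanced” error object in Python; the handler name and response shape are illustrative assumptions, not a prescribed format. The point is that the full stack trace goes to server-side logs while the client sees only a generic, standardized body.

```python
import json
import logging
import traceback

logger = logging.getLogger("api")


def error_response(exc, status=500, public_message="Internal server error"):
    """Log full details server-side; return only a sanitized body to the client."""
    # Full exception and stack trace stay in server logs for debugging...
    logger.error("request failed: %s\n%s", exc, traceback.format_exc())
    # ...while the client receives a generic error object: right status code,
    # minimum message, no stack trace, no internal class or server names.
    body = {"status": status, "message": public_message}
    return status, json.dumps(body)
```

Mapping every backend failure through one function like this also gives all error responses the same shape, which is the standardization role the gateway plays above.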

Keep APIs safe

As we have reviewed in this article, many potential threats can be avoided by putting some thought into API design and establishing governance policies that can be applied across the enterprise. It is important to guard APIs against malicious message content, to encrypt and mask sensitive data at runtime, and to protect backend services against direct access. An API security mistake can have significant consequences, but with the right forethought and management, businesses can make themselves much safer.

This post originally appeared on Medium.

GDPR: Are You Ready?

On May 25, 2018, one of the most significant pieces of European data protection legislation to be introduced in 20 years will come into force. The EU General Data Protection Regulation (GDPR) replaces the 1995 EU Data Protection Directive. The GDPR aims to strengthen individuals’ rights regarding their personal data and seeks to unify data protection laws across Europe, regardless of where that data is processed.

Apigee, which is part of Google Cloud, is committed to GDPR compliance across our API management services. We are also committed to helping our customers with their GDPR compliance journey by providing them with the privacy and security protections we have built into our services over the years.

Apigee Edge customers will typically act as the data controller for any personal data they provide in connection with their use of Apigee Edge. The data controller determines the purposes and means of processing personal data, while the data processor processes data on behalf of the data controller.

Our terms of service articulate our commitments to customers; we are updating them to address GDPR requirements and will make those updates available to customers in the coming weeks.

If you’re a data controller, you can familiarize yourself with and find guidance related to your responsibilities under the GDPR by regularly checking the website of your national or lead data protection authority (as applicable). You should also seek independent legal advice relating to your status and obligations under the GDPR. Bear in mind that nothing in this article is intended to provide you with, or should be used as a substitute for, legal advice.


Apigee Up Close: Protecting APIs with OWASP Best Practices

Webcast replay

Apigee Up Close is a webcast series featuring the Apigee Edge platform and live demos on select topics that users have been asking about.

Do you know how to protect your APIs from malformed client payloads? Do you have a solid grasp of how your application layer is exposing the underlying database?

In this webcast replay, you’ll learn how Apigee Edge can be configured to protect your APIs and backend resources from common OWASP security vulnerabilities and other threats.

We’ll look at an example that uses injection flaw and input validation protections to mitigate the risk of a malicious attacker compromising your API and backend resources.

When we’re done, you'll have a clearer picture of how to:

  • Create an API that protects against injection flaws and validates input from malicious clients
  • Keep it as simple as possible to make adoption easy
  • Report on usage

Register now for the next webcast in this series. It will help you build an API using composite resources.