
State of Microservices: Are You Prepared to Adopt?

Webcast replay

The allure of microservices is clear: shorter development time, continuous delivery, agility, and scalability are characteristics that all IT teams can appreciate. But microservices can increase complexity and require new infrastructure—in other words, they can lead teams into uncharted territory.

Join Gartner’s Anne Thomas and Google Cloud’s Ed Anuff as they present an in-depth look at the state of microservices.

They discuss:

  • what a microservice is and what it isn’t
  • trends in microservices architecture
  • the relationship between microservices, APIs, and SOA architecture
  • connecting, securing, managing, and monitoring microservices

Watch the webcast replay now.

Grow Bigger by Thinking Smaller: Getting Started with Microservices

How to clear security, visibility, and dependency hurdles when implementing microservices

It sounds contradictory, but if your enterprise plans to scale in today’s digital-first world, it’s time to start thinking smaller.

Today, many of the most innovative enterprises are scaling up their applications by breaking them into smaller pieces. This approach to IT architecture—microservices, as it’s commonly known—is a way of restructuring applications into component services that can be scaled independently (depending on whether a team needs more compute resources, memory, or IO), and then having them talk to each other via API service interfaces.

Using microservices, companies reap not only the benefits of agility and speed when building software, but also the ability to easily share and reuse services across the enterprise and beyond. In effect, these smaller services make it possible to achieve both simplicity and complexity at the same time.

According to one recent survey of over 1800 IT professionals, nearly 70% of organizations are either using or investigating microservices, with nearly one-third of organizations using them in production. At Netflix, one of the earliest adopters of microservices, roughly 30 independent teams have delivered over 500 microservices. Amazon, another long-time champion of microservices, has employed the technique to ensure effective communication within teams and enable hundreds of code deployments per day. Numerous other examples, from the open-source Kubernetes project to the Walgreens digital platform strategy, speak to this growing momentum.

But just as microservices present new opportunities for organizational efficiency and growth, they also pose common stumbling blocks—chief among them security, usage and performance visibility, and agility/reuse.

Security: Managing microservices in a zero-trust environment

The microservices architectural model has been both successful and challenging—for many of the same reasons. In essence, developers often build APIs and microservices without the kind of centralized oversight that once existed, and then they deploy them more widely than ever. This can lead to inconsistent levels of security—or no security at all.

When developers deploy microservices in the public cloud and neglect to deploy common API security standards or consistent global policies, they expose the enterprise to potential security breaches. Companies therefore must assume a zero-trust environment. As research firms have noted, a well-managed API platform can help enterprises overcome these threats by enabling the implementation of security and governance policies like OAuth2 across all of their microservices APIs.

Reliability: Delivering performance and enforcing SLAs

Microservices introduce dependencies among your software components: each microservice may depend on, and be depended on by, many others. By extension, this means interdependency problems not unlike those that exist for SOA.

There are many ways to build confidence in the reliability of microservices infrastructure, and visibility is one of the best. Which services are talking to which other services? Which ones are dependent on which other ones? These are important questions to answer—especially when microservices are used by disparate teams in a large enterprise, or by partners and customers.

Echoing the previous section, one way to answer these questions is to implement a management platform for microservices APIs. API management platforms provide the analytics and reporting capabilities that enable enterprises to measure microservices’ usage and adoption, developer and partner engagement, traffic composition, total traffic, throughput, latency, errors, and anomalies.

Armed with this information, companies can iterate quickly, reinforcing components with promising usage trends and fixing interdependency problems as they’re identified. This speed and agility are important: stress-testing and optimization can cause a company to lose momentum as it examines unlikely theoretical scenarios—which is deeply problematic, given that for many enterprises, microservices and APIs are valuable because they can dramatically shorten a new service’s time to market.

With real-time insight into API behavior, companies can balance speed, scale, and reliability by launching new services, collecting analytics, and implementing a broad range of improvements after only a few weeks of development sprints.

Adaptability: Building agile microservices for clean reuse

Many existing and legacy services are not built for modern scale. Consequently, many enterprises are replacing monolithic applications with microservices that adapt legacy resources to modern architectures. In most cases, however, other applications still consume services from the monoliths. This means the transition from monolith to microservices must be seamless—in other words, invisible to the other applications and developers using the monolith’s services.

Furthermore, microservices are typically purpose-built for particular use cases. But as soon as a microservice is shared outside the “two-pizza team,” developers need the ability to adapt it for wider use. And what’s a service that’s meant to be shared and reused across teams and even outside of your company? It’s an API.

An API platform serves as an API facade, delivering modern APIs (RESTful, cached, and secured) for the legacy SOAP services of the monolith apps, and exposing the new microservices. This makes it possible for mobile and web app developers to continue consuming an enterprise’s services without needing to worry about the heterogeneous environment or any transitions from monolith app to microservices by the service provider.

The way forward

As microservices become increasingly popular throughout the enterprise, more and more of them are being shared—both internally and externally. And sharing services, ultimately, comes down to APIs.

As a result, companies are increasingly looking to API management platforms to provide the security, reliability, visibility, and adaptability they need to properly run microservices architecture. Also known as “managed microservices,” this deployment model provides enterprises with a single window for managing all microservices APIs across microservices stacks and clouds—and it’s transforming enterprises far and wide.

Image: Wikimedia Commons

Tutorial: Deploying Apigee Edge Microgateway

In a previous post, we discussed some of the features of Apigee Edge Microgateway and the power of hybrid API management.

Here, we’ll walk you through tutorials to deploy Apigee Edge Microgateway as a Docker container, in PaaS platforms like Cloud Foundry, and in cloud-native PaaS platforms like Google App Engine (GAE) and Azure App Services.

Recommended prerequisites

Before you adopt any of these deployment options, there are some steps to complete first:

  1. Configure Microgateway on a VM or host outside of the intended deployment pattern. This produces a configuration YAML file that is used in all of the following deployment options. The configuration file is of the format {orgname}-{env}-config.yaml (see the sketch after this list).
  2. Enable plugins as necessary in the YAML file. Configure and set other parameters as necessary (log levels and connection settings, for example).
  3. Develop custom plugins as npm modules. Installation of npm modules can be done via a public npm repo (npm.org) or a private npm repo.
  4. Fork Apigee Edge Microgateway in GitHub for Azure App Services. It’s available on GitHub here. Some cloud vendors (such as Google) even provide local repositories (in which case you can load a clone of the microgateway project).
  5. Edit the config YAML to expose just a set of API proxies. For more information, check out this documentation.
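
For step 1, the configuration is typically generated with the edgemicro CLI. Here’s a minimal sketch, assuming the CLI is already installed globally and using placeholder values for the org, environment, and username:

# Pair Microgateway with your Apigee Edge org and environment;
# this writes {orgname}-{env}-config.yaml and prints the key/secret
# pair used in the deployment options below
edgemicro configure -o your-orgname -e your-env -u your-username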

Build a Docker image for Microgateway

In this section we’ll show you how to build a Docker image for Microgateway.

Step 1: Clone the project


git clone https://github.com/srinandan/apigee-edgemicro-docker

Step 2: Switch the directory


cd apigee-edgemicro-docker

Step 3: Copy the {org}-{env}-config.yaml file to the current folder and edit the Dockerfile with the correct file name (see the prerequisites).

Step 4: Build the Docker image


docker build --build-arg ORG="your-orgname" --build-arg ENV="your-env" \
  --build-arg KEY="bx..xxx2" --build-arg SECRET="exx..x0" -t microgateway .

Step 5: Start Microgateway


docker run -d -p 8000:8000 -e EDGEMICRO_ORG="your-orgname" \
  -e EDGEMICRO_ENV="your-env" -e EDGEMICRO_KEY="bxx..x2" \
  -e EDGEMICRO_SECRET="ex..x0" -P -it microgateway

By default, Microgateway writes its logs to /var/tmp. Consider mounting a volume over this folder so the logs remain accessible from outside the container.
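
For example, a run command along these lines (the host path here is just an illustration) keeps the logs on the host:

# Mount a host directory over /var/tmp so Microgateway logs persist
# outside the container
docker run -d -p 8000:8000 -v /var/log/microgateway:/var/tmp \
  -e EDGEMICRO_ORG="your-orgname" -e EDGEMICRO_ENV="your-env" \
  -e EDGEMICRO_KEY="bxx..x2" -e EDGEMICRO_SECRET="ex..x0" \
  -P -it microgateway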

Microgateway on Google App Engine

Here we’ll walk you through deploying Microgateway as an app on Google App Engine (GAE).

Step 1: Fork or clone the Apigee Edge Microgateway GitHub repo (this is optional).

Step 2: Clone the forked (or main) repo in the gcloud shell.


git clone https://github.com/apigee-internal/microgateway.git
cd microgateway

Step 3: Copy the {org}-{env}-config.yaml file to the microgateway/config folder. 

Step 4: Review the app.yaml file.


# [START runtime]
service: microgateway
runtime: nodejs
env: flex
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 2
resources:
  cpu: 1
  memory_gb: 2
  disk_size_gb: 10
env_variables:
  EDGEMICRO_KEY: 'bx..x2'
  EDGEMICRO_SECRET: 'ex..x0'
  EDGEMICRO_CONFIG_DIR: '/app/config'
  EDGEMICRO_ENV: 'env-name'
  EDGEMICRO_ORG: 'org-name'
# [END runtime]

Review the following fields:

  • The min and max instances (for auto-scaling)
  • Resources (cpu, memory)
  • Microgateway environment variables (key, secret, org and env)

Step 5: Deploy the app to GAE


gcloud app deploy --project your-project-name
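
Once the deploy completes, you can confirm the service is up by tailing its logs. A quick sketch using standard gcloud commands (the project name is a placeholder):

# Tail logs for the microgateway service defined in app.yaml
gcloud app logs tail -s microgateway --project your-project-name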

Microgateway on Azure App Services

Here we'll walk you through how to deploy Microgateway as an app on Azure’s App Services platform.

In the Azure portal, perform the following steps:

Step 1: Click on “App Services.”

Step 2: Click on “+ Add.”

Step 3: Search for “node.js,” select “API App,” and click “Create.”

Step 4: Enter application details.

Step 5: Click on "Application Settings."

Step 6: Add the environment variables required for Microgateway.

Step 7: Save the settings (key and secret are obtained when Microgateway is configured to the org and env).

Step 8: Fork the Apigee Microgateway repo. Set up a deployment option (for example, GitHub) and point it to the Microgateway repo.

Step 9: Enter authentication details to the repo.

Step 10: Ensure the deployment is successful.
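
If you’d rather script steps 6 and 7 than click through the portal, the same settings can be applied with the Azure CLI. A minimal sketch, assuming placeholder resource group and app names:

# Set the Microgateway environment variables as App Service settings
az webapp config appsettings set \
  --resource-group your-resource-group \
  --name your-microgateway-app \
  --settings EDGEMICRO_ORG="org-name" EDGEMICRO_ENV="env-name" \
    EDGEMICRO_KEY="bx..x2" EDGEMICRO_SECRET="ex..x0"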

Microgateway on Cloud Foundry

In this section, we'll show you how to deploy Microgateway as an app on Cloud Foundry (get all the details in Pivotal’s documentation and GitHub).

Step 1: Fork the Apigee Edge Microgateway GitHub repo (this is optional).

Step 2: Clone the forked (or main) repo


git clone https://github.com/apigee-internal/microgateway.git
cd microgateway

Step 3: Copy the {org}-{env}-config.yaml file to the microgateway/config folder.

Add the “cloud-foundry-route-service” plugin to the config file if it doesn’t already exist in the plugin sequence.


edgemicro:
  port: 8000
  max_connections: 1000
  …
  plugins:
    sequence:
      - oauth
      - cloud-foundry-route-service

Step 4: Review the manifest.yml file


---
applications:
- name: edgemicro
  memory: 512M
  instances: 1
  host: edgemicro
  path: .
  buildpack: nodejs_buildpack
  env:
    EDGEMICRO_KEY: 'bx..x2'
    EDGEMICRO_SECRET: 'ex..x0'
    EDGEMICRO_CONFIG_DIR: '/app/config'
    EDGEMICRO_ENV: 'env-name'
    EDGEMICRO_ORG: 'org-name'

Review the following fields:

  • Instances (for auto-scaling)
  • Memory (min: 512M)
  • Microgateway environment variables (key, secret, org, and env)

Step 5: Deploy the app to Cloud Foundry


cf push

Step 6: Review the logs

If your Cloud Foundry instance doesn’t have internet access (to download npm modules), you must follow the instructions for using the Node.js buildpack in a disconnected environment here.
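
For step 6, a minimal sketch using the standard cf CLI (the app name matches the manifest above):

# Dump recent logs for the edgemicro app
cf logs edgemicro --recent
# Check instance health and memory usage
cf app edgemicro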

Apigee Microgateway is a great choice for microservices developers and teams that want to add API management features as close to their microservices as possible (to reduce latency), and to do so natively on their microservices platform, with no additional skills required.

Questions, comments, or observations? Join the conversation on the Apigee Community.

Deploying Microgateway in Docker and PaaS

How to add API management capabilities natively on the microservices stack of your choice

A lot of enterprises are exploring microservices as an architecture pattern for building or exposing new APIs. Often, a microservices strategy includes an infrastructure stack with components like Docker, Cloud Foundry, Kubernetes, and OpenShift, or cloud-native PaaS platforms like Google App Engine (GAE) and Azure App Services. Apigee Edge provides API management capabilities for microservices deployed in such an infrastructure stack. 

In this post, we’ll explain the power of Apigee Edge Microgateway and the options for deploying it. In an upcoming installment, we’ll walk you through a handful of quick tutorials to get you started deploying Microgateway as a Docker container, in PaaS platforms like Cloud Foundry, and in GAE and Azure App Services. 

These options help microservices developers and teams add API management capabilities natively on the microservices stack of their choice.

What is Apigee Microgateway?

What is Apigee Edge Microgateway, you ask? It’s a secure, HTTP-based message processor for APIs. Its main job is to process requests and responses to and from backend services securely while asynchronously pushing API execution data to the Apigee Edge API platform, where it’s consumed by the Edge analytics system.

Edge Microgateway is easy to install and deploy—you can have an instance up and running within minutes.
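
As a rough sketch of that setup flow—using the documented edgemicro CLI, with placeholder org, environment, and user values:

# Install the Microgateway CLI from npm and initialize it
npm install -g edgemicro
edgemicro init
# Pair the gateway with your Apigee Edge org and environment
edgemicro configure -o your-org -e your-env -u your-username
# Start it with the key/secret printed by the configure step
edgemicro start -o your-org -e your-env -k your-key -s your-secret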

Typically, Edge Microgateway is installed within a trusted network, in close proximity to backend target services. It provides enterprise-grade security and some key plug-in features—including spike arrest, quota, analytics, and custom extensions—but not the full capabilities or footprint of Apigee Edge. You can install Edge Microgateway in the same data center or even on the same machine as your backend services.

For a detailed explanation of how to install, set up, and use Apigee Edge Microgateway, check out this documentation page.

The power of hybrid API management

Microgateway enables hybrid API management, which gives users the ability to:

  • Centrally define/author API proxies
  • Centrally define API products, developer apps, and developer catalogs, among other things
  • Distribute policy enforcement of API proxies on many gateways, which can be deployed on customer data centers or other cloud providers
  • Centrally collect and view API analytics

Multi-cloud deployment options

Microgateway also lets users leverage cloud-native deployments across major cloud providers.

In the next post, we'll offer some tutorials to help you get started with various deployment options for Apigee Edge Microgateway.

Simplifying Microservices Management

How Apigee and Istio can bring APIs and microservices together

If you base your IT technology strategy on what you read in the blogs, then you are already building your entire technology stack as a collection of microservices, built by “two-pizza teams,” running in containers, and deployed to the cloud using a container orchestration product like Kubernetes. There are a lot of good reasons to adopt such an architecture, from agility to resilience.

But if you’ve actually tried to follow through on all this, you probably discovered that it’s harder than it sounds. (For instance, what if you have team members who can’t or won’t eat pizza?)

But between pizza breaks, your teams are probably asking how a service in one container can locate another service, since the containers will be coming up and down all the time. Of course, there are facilities in Kubernetes for this, but they’re not enough.

What if one of the microservices is unresponsive from time to time? How do all the API calls between microservices remain encrypted in transit using the proper protocols and cipher suites? How do you control which microservices are authorized to talk to others, while preventing insiders from directly accessing sensitive data from their command line?

Building and deploying microservices is hard

After contemplating these problems and others, we realized that building and deploying microservices the right way is a lot of work.

That’s why, when Apigee joined Google last November, we were happy to learn that Google had not only solved many of these problems for its own services, but had also been working on an open source project aimed at solving these same problems for the rest of the microservices world.

So, we’re excited that today Google, along with IBM and Lyft, is announcing Istio, an open source project designed to ease the pain around connecting and securing a network of microservices.

Within Google, the Apigee team already runs a diverse set of services on a variety of platforms, and we’re expecting to deploy more. Istio will help us solve the very problem it’s being advertised to solve—making a mesh of microservices secure and reliable. But we think there’s a lot more to this project.

Most Apigee customers are talking about microservices, and many of our customers are adopting them. We expect that the presence of Istio in the marketplace will give them a great framework for building those networks of services. So we won’t be surprised when our customers ask us, “Will Apigee work with Istio?”

The answer is “yes,” and we’re working with the Istio community on exactly that today.

Microservices require API management

We feel that Apigee and Istio are a great fit. Istio is built with containers and microservices management in mind. The Apigee Edge API platform provides common visibility and management across both APIs and microservices for organizations of any size.

For instance, within a single Kubernetes cluster—and even with Istio helping mediate—an unreliable or slow microservice can drag the SLA of an entire application down along with it.

The kinds of sophisticated analytics that the Apigee platform provides can help administrators and product managers see these kinds of issues and react to them before it’s too late.

Once services are used beyond a single team and outside a single cluster, a different set of API management capabilities becomes necessary. For instance, by enforcing API quotas, an API product team can help control how much load a particular team can place on the whole collection of microservices.

Apigee is used by many organizations to enforce these types of quotas, allowing API teams to dynamically adjust how much API load is consumed by each organization that uses an API.

And, when services are exposed outside the corporation, capabilities like security based on OAuth, intelligent threat detection, and “bot” detection become important. Often, services exposed outside the organization won’t be adopted unless the API team uses a platform like Apigee Edge to enable developers to learn about and access APIs quickly via self service.

In short, we feel that the microservices movement is creating an explosion in the number of APIs in the world—and in the end, that makes API management tools even more important.

For that reason, we’re excited to be working on integrating our suite of API management tools with Istio—including our open source, software-as-a-service, and on-premises products.

Managing the Complexity of Microservices Deployments

Webcast replay

To rapidly deliver microservices to production, organizations are turning to infrastructure automation provided by a cloud-native platform, like Cloud Foundry. With a platform in place, every microservice team will have what they need to create a CI/CD pipeline that safely delivers applications to a production environment. The final ingredient for success is knowing the right patterns for connecting microservices together over HTTP using REST APIs.

In this webcast replay, Kenny Bastani from Pivotal and Prithpal Bhogill from Apigee dive into a reference architecture that demonstrates the patterns and practices for securely connecting microservices together using Apigee Edge integration for Pivotal Cloud Foundry.

This session covers:

  • basics for building cloud-native applications as microservices on Pivotal Cloud Foundry using Spring Boot and Spring Cloud Services
  • patterns and practices that are enabling small autonomous microservice teams to provision backing services for their applications
  • how to securely expose microservices over HTTP using Apigee Edge for PCF

Gamesys: From Monolith to Microservices

Webcast replay

You’ve heard the microservices success stories. You’ve heard about organizations that have scaled 50 services up and down seamlessly and deployed each in minutes. You, on the other hand, are too nervous to touch any part of your huge legacy codebase, in case it crumbles to the floor.

How do monoliths really get split up into microservices? How do real companies succeed with this architecture?

In this webcast replay, architects at Gamesys share their first-hand experiences and discuss:

  • the whys and why nots of microservices
  • the importance and effects of API governance and standards
  • the most effective team structure
  • tips for splitting monoliths
  • DevOps processes and API design employed by Gamesys

Microservices: Easy in Theory, but Not in Reality

Enterprise CTOs have microservices on the mind. After relying for years on monolithic stacks, these IT leaders are trading in their old architectures for independently deployable systems and granular, lightweight services.

Speed and agility are what these CTOs are after, and microservices can deliver—phenomenally, in fact. Microservices can empower companies to leave behind large, slow-moving teams and complex deployments, and to shift to agile, independent teams delivering on their own cadences. The opportunity is huge.

But while adopting microservices is easy in theory, practical reality is often less straightforward. Microservices resources and tools have grown abundant, and CTOs can look to success stories ranging from Netflix's architecture to the open-source Kubernetes project to Brazilian retailer Magazine Luiza's digital reinvention. This variety makes it easy to think, "I can do that."  

This is often where the trouble starts.

What makes a microservices architecture so powerful is also what can make it challenging for large enterprises to adopt: it’s not just another technology but rather an entirely new model that can impact virtually all aspects of business operations.

I’ve seen many companies navigate this transition, and, when it breaks down, it is often for the same reasons. The difficult and surprising parts don’t always involve technology—they often involve cultural and organizational change.

Typically, if a large enterprise is going to implement microservices the right way, it needs to first transform the heart of its organizational culture. This challenging prerequisite can be traced to the fact that microservices are largely designed for (and by) small teams. There’s a certain organizational inertia within larger enterprises that can be at odds with this granular approach.

Solving three common challenges

Speed versus uniformity

The first stumbling block large enterprises frequently run into when adopting microservices is balancing independence versus control. There is a tendency to say, “Do what is right for you,” and to then follow it up by asking, “How many vendors and skills should we manage?” This can happen because large organizations tend to be prescriptive by default. While this is not necessarily a bad thing, it relegates true microservices to the realm of theoretical discussion.

Ultimately, there is no silver bullet. But when in doubt, it is often beneficial to favor speed over uniformity—within bounds. (Do you really need three NoSQL databases? Not likely.) If your microservices are wrapped with clean APIs (see the third point, below), even better—uniformity can always come later. After all, with clean APIs, implementations can change.

The dreaded DevOps requirement

Another organizational challenge large enterprises often have to overcome is resistance to the additional skills and duties microservices impose on teams—namely, the dreaded DevOps requirement.

If Team A builds a microservice, then it is truly Team A’s job to keep it up and running. Unfortunately, DevOps is not a skill that can simply be acquired—or even taught, so to speak. Unlike programming languages, for instance, DevOps is an “experience” skill—one that only comes with the passage of time.

Two practical design principles can help you prevail: (a) invest in tools that simplify operations, and (b) for some really difficult things (such as managing a NoSQL database like Cassandra), have a separate team that can focus on those skills specifically.

Chaotic contracts

A third common issue relates to implementation. Devising and holding to a contract with downstream teams is very different from walking over to the other team and hashing out the special features they need.

In the first scenario, you have a well-chosen contract (manifested in a well-documented API) that does not cater to the special interests of the “loudest” downstream team. The contract is 1:N, so with N teams there are O(N) contracts. In the second, you end up with extensions that are bespoke and brittle. Each contract is 1:1, so with N teams you get O(N^2) contracts. For an organization with 1,000 services, that is the difference between 1,000 and 1,000,000 interfaces!

While the theory is perfectly clear (“fewer is better”), organizational dynamics can prevent its application. Enterprises facing this problem should explore an “open to all” approach to APIs.

All APIs and their documentation should be published, and all interactions between microservices should go through a registration process. As long as there are no side contracts, having too many APIs shouldn’t be a problem—periodic pruning based on observation of use can prevent any issues that crop up from getting out of hand. Openness with periodic reviews should be all you need.

More than a tech implementation

Organizational challenges aside, microservices are likely here to stay. This architecture reflects the way software should be written. But to do it right, leadership should first understand that adopting microservices involves more than a mere technology implementation. In the end, it is an organizational change that gets down to the very fundamentals of independence and empowerment.

This post originally appeared on CIO.com.

Image: Flickr Creative Commons/Zhang Wenjie

Microservices Done Right

Webcast replay

What does it mean to be doing microservices right?

Seventy percent of organizations claim to be using or investigating this new trend because it's hard to resist the promise of faster innovation and the ability to independently develop, deploy, and scale components of large applications.

But, challenges exist—both known and unknown.

In this webcast replay, we identify key ingredients of microservices success. We discuss:

  • why microservices are taking off
  • challenges faced with microservices
  • how companies are using API management to solve their challenges

Walgreens: Expanding Customer Loyalty with Microservices

Not too long ago, if a big brand wanted to implement an omnichannel customer rewards program, it was a little like gathering 1,000 or so of your closest friends for a particularly epic and elaborate mannequin challenge video—it was logistically daunting, and beholden to so many moving pieces that a hiccup or proposed change anywhere in the chain could bring the whole thing crashing down.

Needless to say, that old approach limited what brands could do, and who was willing to partner with them. With its microservices strategy, powered by Apigee Edge, Walgreens is showing that those limits no longer apply.

Microservices break up the complexity of a full, rich application into a series of discrete services connected through APIs. This approach enables different teams to move at different speeds without affecting one another. It also standardizes communication between software elements and promotes reusability of resources, among other benefits.

“Microservices are a new way of thinking,” Walgreens developer evangelist Drew Schweinfurth told us recently, noting that the approach helped the drugstore giant make its “services as light as possible and easy to develop on.”

The benefits of Walgreens’ approach to microservices and APIs are clear. The company now works with more than 275 partners. Its prescription refill API, which lets developers integrate Walgreens prescription services into their apps, fills a prescription every second. Partner integrations that would previously have taken months now take as little as hours.

Breaking down Balance Rewards

To explain the benefits of microservices, Schweinfurth singled out the company’s Balance® Rewards program. Launched in 2014, it enables third-party apps to connect to Walgreens customer data and award individual customers Walgreens rewards points for various activities, such as walking or running. As of April, the service had attracted more than 800,000 users across 250,000 connected devices, and doled out nearly 2 billion rewards points.

With Balance® Rewards, Walgreens wanted to simplify all the “moving pieces” at work in its loyalty system, Schweinfurth said.

“Let’s just scrap that whole idea,” he said of the old architecture. “Let’s build an OAuth login that allows customers to log in through third-party applications,” and lets developers use an API to “make POST requests on behalf of the activity data that’s happening inside of the third party’s application, in turn giving that customer reward points for allowing that connection to happen.”

The end result: If a customer connects, say, a step-counting app to her Walgreens account, that customer will earn rewards points as she walks or runs. For the customer, it’s an incentive to be healthy and a reward for being a Walgreens customer. For the brand, it’s a way to improve experiences for both customers and partners while expanding Walgreens’ digital presence into other companies’ products and online experiences.
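
As a purely hypothetical illustration of that flow—the endpoint, fields, and token below are invented for this sketch, not Walgreens’ actual API:

# Hypothetical sketch: a third-party app posts activity data on behalf
# of a customer who granted OAuth access, earning them rewards points
curl -X POST https://api.example.com/loyalty/v1/activity \
  -H "Authorization: Bearer CUSTOMER_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"activity": "walking", "steps": 8500}'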

The key, Schweinfurth said, is that even though an app’s overall customer experience is rich, individual services are “tiny.”

“Send over my step data. Send over my blood glucose or my blood pressure. And then [give me] rewards,” he said. “[That] is how we look at building a microservice.”