Azure Cognitive Services: Deploy to a Container

You have tested the container locally, which is great for development. Now you need to decide how to deploy it. You have the choice to deploy as a stand-alone Docker container or within a Kubernetes environment. Whichever mode you choose, you still have the option to deploy on-premises or on Azure. In this section, we will focus on what these deployment options look like within Azure.

Deploy to Azure Container Instances

Azure Container Instances is a serverless environment for running containers. Here you can run containers on demand with minimal setup. However, you will want to configure one or more of the security options: Azure Firewall, TLS/SSL support, and running with a managed identity.

Running containers within Azure Container Instances is a common choice for independent containers. It lets you run your container with minimal setup, and you pay only for containers that are running. However, there is little coordination with other resources, such as additional containers that connect to it. For deployments with multiple containers working together, the preferred option is Kubernetes.
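As a rough sketch, starting the language detection container in Azure Container Instances might look like the following Azure CLI command. The resource group, registry path, DNS label, and resource sizes are placeholders, and pulling from a private registry also requires registry credentials or a managed identity.

# Minimal sketch: run the Cognitive Services container on demand in ACI.
# <resource-group>, <login-server>, <endpoint-uri>, and <api-key> are placeholders.
az container create \
  --resource-group <resource-group> \
  --name cog-svc-aci \
  --image <login-server>/cog-svc:v1 \
  --ports 5000 \
  --dns-name-label cog-svc-demo \
  --cpu 1 --memory 4 \
  --environment-variables Eula=accept Billing=<endpoint-uri> \
  --secure-environment-variables ApiKey=<api-key>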

Deploy to Azure Kubernetes Service

Running containers within Azure Kubernetes Service is a common path for applications that have multiple moving parts and components. Kubernetes is popular because it enables scripted deployments and easy scaling of containers. Organizations maintain control over the full application environment and deploy the containers together. The file defining your deployment is called a Kubernetes manifest. Kubernetes runs the containers described in the manifest file and self-heals if container failures occur.
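For example, once the cog-svc deployment created later in this section is running, scaling it is a single command, and Kubernetes automatically replaces any replica that fails:

# Scale the cog-svc deployment (defined in the manifest later in this section).
kubectl scale deployment cog-svc --replicas=3

# Kubernetes restarts failed containers on its own; watch the pods converge.
kubectl get pods --watch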


Publish to Azure Container Registry

You will need to access your image from a repository whether running in Kubernetes or Docker. Azure Container Registry is a private repository for your container images. You will publish the image you built here so it can be accessed by your deployment process. It supports role-based access, which you use to ensure the image is only available to the correct people and services. You can access images stored in the registry using your Azure credentials or from another Azure service.
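As a sketch, creating a registry and granting a service pull-only access might look like the following; the registry name, resource group, and principal ID are placeholders for your own values.

# Create a private container registry (names are placeholders).
az acr create --resource-group <resource-group> --name <acr-name> --sku Basic

# Use role-based access control to grant pull-only access to a
# service principal or managed identity; <principal-id> is a placeholder.
az role assignment create \
  --assignee <principal-id> \
  --role AcrPull \
  --scope $(az acr show --name <acr-name> --query id --output tsv)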

Next you will deploy the language detection component of your health care application.

 

Exercise: Deploy a container

For the health care application, you will deploy a language detection container in coordination with other containers. For example, one container may be the customer-facing web app and another may run the model that recommends treatments. Since multiple containers will be deployed together, you will choose Kubernetes. Remember that the application is designed to work on-premises or in Azure, so Kubernetes fits the need. You will do the task in two parts, detailed below: publish a container image and deploy it to Azure Kubernetes Service.

Publish Docker images to Azure Container Registry

For the first step in deployment, you will tag the Docker image and publish to Azure Container Registry.

  1. Use the Azure CLI to log in and retrieve the Container Registry address, replacing <acr-name> and <resource-group> with your resource names.
    az acr login --name <acr-name>
    
    az acr list --resource-group <resource-group> --output table

The output of these commands includes your login server name, in this case cogsvcacr.azurecr.io.

NAME       RESOURCE GROUP              LOCATION   SKU    LOGIN SERVER
---------  --------------------------  ---------  -----  ---------------------
cogsvcacr  cognitive-services-demo-rg  westus2    Basic  cogsvcacr.azurecr.io

  2. Find the Docker image to publish. Use the image you created in the previous section, named cog-svc-language.

docker images

 

REPOSITORY         TAG      IMAGE ID
cog-svc-language   latest   49379c513da1

  3. Tag the Docker image by adding the login server, and push the tagged image to the Azure Container Registry. The operation may take several minutes. Be sure to replace <service-image-name> and <login-server> with the correct values.

docker tag <service-image-name> <login-server>/cog-svc:v1
docker push <login-server>/cog-svc:v1
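As an optional check, not part of the original steps, you can confirm the push succeeded by listing the repository's tags:

az acr repository show-tags --name <acr-name> --repository cog-svc --output table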

 

Deploy container to Azure Kubernetes Service

To deploy to Kubernetes, you need a manifest file. This file defines the desired deployment state, including which image, port, billing endpoint, and API key will be used. In a complete project, the file would include additional resources that interact with your Cognitive Services container. However, for this example you will deploy the single Cognitive Services container and use it from your local application.

  1. Create the manifest file by copying the following YAML and saving it as cog-svc.yml.
apiVersion: v1
kind: Service
metadata:
  name: cog-svc
spec:
  selector:
    app: cog-svc
  type: LoadBalancer
  ports:
  - name: cog-svc
    port: 5000
    targetPort: 5000
    protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cog-svc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cog-svc
  template:
    metadata:
      labels:
        app: cog-svc
    spec:
      containers:
      - name: cog-svc
        image: <login-server>/cog-svc:v1
        ports:
        - name: public-port
          containerPort: 5000
        env:
        # Required by Cognitive Services containers; replace the placeholder
        # values with your resource's billing endpoint and API key.
        - name: Eula
          value: "accept"
        - name: Billing
          value: "<endpoint-uri>"
        - name: ApiKey
          value: "<api-key>"
        livenessProbe:
          httpGet:
            path: /status
            port: public-port
          initialDelaySeconds: 30
          timeoutSeconds: 1
          periodSeconds: 10
      automountServiceAccountToken: false
  2. In the cog-svc.yml file that you created, replace the value for image with the full name of the image you tagged and pushed (for example, cogsvcacr.azurecr.io/cog-svc:v1), and set the billing endpoint and API key values for your Cognitive Services resource.

  3. Install kubectl, the command-line interface for working with Kubernetes.

az aks install-cli

 

  4. You will authenticate to Azure Kubernetes Service before deploying the containers. Azure CLI can store the Kubernetes Service credentials for you by appending them to the ~/.kube/config file. Retrieve and store the credentials for your Azure Kubernetes Service cluster with the following command, replacing <resource-group> and <aks-name> with your values.

az aks get-credentials --resource-group <resource-group> --name <aks-name>

  5. Run the deployment with the kubectl apply command.

kubectl apply -f cog-svc.yml

If your output does not show that the application was created, check that your credentials are set and that your network location is allowed to access the Kubernetes service.
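If you need to dig further, a few standard kubectl commands help narrow down a failed deployment; the label below matches the manifest above, and <pod-name> is a placeholder.

# Confirm kubectl is pointed at the expected cluster.
kubectl config current-context

# Check whether the pods were scheduled and are running.
kubectl get pods -l app=cog-svc

# Inspect events and container logs for a failing pod.
kubectl describe pod <pod-name>
kubectl logs <pod-name>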

  6. To test the service, you need its public IP address. Run the following command to confirm the service is running and to retrieve the public IP.

kubectl get service cog-svc

  7. A simple test from the browser confirms that you can access the service and run API commands through the Swagger UI. Open your browser to http://<external-ip>:5000/swagger to test it out.
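If you prefer the command line, a quick request against the container's status endpoint (the same path the liveness probe uses) confirms it is reachable. The language detection path shown below assumes the v3.0 Text Analytics API; check the Swagger UI for the exact path your container version exposes.

# Replace <external-ip> with the EXTERNAL-IP value from 'kubectl get service cog-svc'.
curl http://<external-ip>:5000/status

# Assumed v3.0 language detection path; verify it in the Swagger UI first.
curl -X POST http://<external-ip>:5000/text/analytics/v3.0/languages \
  -H "Content-Type: application/json" \
  -d '{"documents":[{"id":"1","text":"Hello world"}]}'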

In this simple deployment, we included a public port, which makes the container accessible to everyone. This is not how you will want to deploy a long-running container. You will learn about securing the container and its deployment in the next section.


InnerSource program – Defining workflows & Measuring program success

Defining workflows

For projects that encourage external contributions, be sure to specify what workflow the project follows. The workflow should include details about where and how branches should be used for bugs and features, how pull requests should be opened, and any other details people outside the repository team should know before they push code. If you don’t yet have a workflow in mind, you should consider the GitHub flow.
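As an illustration only, a typical GitHub flow contribution might look like the short sequence below; the branch name and commit message are made up for the example.

# Create a topic branch, commit your change, and push it to the repository.
git checkout -b fix/broken-readme-link
git commit -am "Fix broken link in README"
git push -u origin fix/broken-readme-link

# Open a pull request (shown with the GitHub CLI; the web UI works as well).
gh pr create --fill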

You should communicate a strategy for managing releases and deployments. These parts of the workflow will impact day-to-day branching and merging, so it’s important to communicate them to contributors. Learn more about how they relate to your Git branching strategy.

Measuring program success

Any team venturing into InnerSource should think about the kinds of metrics they want to track to gauge the success of their program. While traditional metrics like “time to market” and “bugs reported” are still applicable, they aren’t necessarily going to illustrate the benefits achieved through InnerSource.

Instead, consider adding metrics that show how external participation has improved project quality. Is the repository receiving pull requests from external sources that fix bugs and add features? Are there active participants in discussions around the project and its future? Is the program inspiring an InnerSource expansion that drives benefits elsewhere in the organization?

In short, metrics are hard, especially when it comes to measuring the value and impact of individual and team contributions. If misused, metrics can harm the culture and existing processes, and diminish collective sentiment toward the organization or leadership team. When thinking about measuring InnerSource adoption, consider the following (a short sketch of computing one of these metrics follows the list):

  • Measure process, not output
    • Code review turnaround time
    • Pull request size
    • Work in progress
    • Time to open
  • Measure against targets and not absolutes
  • Measure teams and not individuals
    • Number of unique contributors to a project
    • Number of projects reusing code
    • Number of cross-team @mentions
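As a rough illustration of the unique-contributors metric mentioned above, a repository-local count can be pulled straight from Git history; the three-month window is arbitrary.

# Count unique contributors to this repository over the last quarter.
git shortlog --summary --numbered --email --since="3 months ago" | wc -l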

Learn about the successes others have enjoyed in these InnerSource case studies.