
Wednesday, December 4, 2019

Create PowerApps for Face Detection using Microsoft Azure Cognitive Services and Power Automate

This article explains how to call the Azure Cognitive Services APIs from Microsoft Flow (Power Automate) and use that flow from PowerApps. The Microsoft Flow team has released new connectors for the Azure Cognitive Services APIs, which are currently in preview. They include Computer Vision and the Face API, but I will use the HTTP connector here since it is the well-tested option.
Each connector has a different set of actions, and we can use those actions by passing the proper input to the connections.
Requirements
  1. Face API URL & Key
  2. On-Premises Data Gateway – SQL Server
  3. Microsoft Flow – Free subscription or O365 subscription
Creating the Face API
To create a Face API, you need an Azure Subscription. If you don’t have a subscription, then you can get a free Azure subscription from here.
Visit portal.azure.com and click “Create a Resource”.
Under New, choose "AI + Machine Learning" -> Face


Create a new face resource by providing the required details.

Once the resource is created, you need to get the key and URL (EndPoint).
Note down the endpoint and key and we will use it on Microsoft Flow.
PowerApps
Sign in to your PowerApps account and click Canvas App from Blank. Choose the Phone form factor option, give your app a name, and then click Create.


Once you have created the app, click Media under the Insert toolbar and then insert a Camera control on the screen to take the picture.

In the formula bar you will see the Camera property set to zero, which selects only the rear camera.

So, in order to switch to the front camera, we need a toggle control. Insert a Toggle from Controls and, in its OnChange property, paste this code: UpdateContext({EnableFront:!EnableFront})


Now click the camera control, change its Camera property to If(EnableFront=true,1,0), and change its OnSelect property to ClearCollect(capturedimage,Camera1.Photo)

We have set up the camera; now we need to insert an Image control from the Media tab so that we can see the captured picture. Once the image control has been inserted, change its Image property to First(capturedimage).Url.
Now we need to create a flow. Click the Action tab, then click Flows; a pane appears with the option to create a new flow. Click it. A new page opens where you can see the flow. Click Next step and add a SharePoint "Create file" action to store the input image. You need to provide the Site Address, Folder Path, and File Name. For the File Content, ask for the actual image file from PowerApps; it will be named Createfile_FileContent.

In the next step, we pass the image to a Compose action so that we can store it in a variable and then pass it on to the HTTP action.

Here in the Compose action, we have to convert the picture to binary format using the function dataUriToBinary(triggerBody()['Createfile_FileContent']). To do that, first click on the Inputs field, then under Expression search for the function dataUriToBinary, and for its input choose Createfile_FileContent.
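For reference, the conversion that dataUriToBinary performs amounts to stripping the "data:image/...;base64," prefix from the data URI and base64-decoding the rest. A minimal C# sketch of that idea (the helper name is illustrative and not part of the flow):

using System;

public static class DataUriHelper
{
    // Illustrative only: roughly what dataUriToBinary does with the image sent from PowerApps.
    // A data URI looks like "data:image/jpeg;base64,/9j/4AAQ...".
    public static byte[] DataUriToBinary(string dataUri)
    {
        // Everything after the first comma is the base64-encoded image.
        var base64 = dataUri.Substring(dataUri.IndexOf(',') + 1);
        return Convert.FromBase64String(base64);
    }
}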

In the next step, we pass the binary file (the picture) to an action named HTTP. This action can call any API when given the URL, key, and request fields. Choose a new action, search for HTTP, and select it.

In the HTTP action, choose POST as the Method and set the URI to https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect
For the headers:
Ocp-Apim-Subscription-Key: the API key from Azure
Content-Type: application/octet-stream
Then we need to provide the Queries. The first attribute is
returnFaceAttributes: the attributes we want returned for each face in the picture:
age,gender,emotion,smile,hair,makeup,accessories,occlusion,exposure,noise
(For reference, the equivalent request is sketched in C# just below.)
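To make the HTTP action's configuration concrete, here is roughly the same request expressed with C#'s HttpClient. This is only a sketch under the assumptions above (West Central US endpoint, your own API key); the flow itself does not use this code:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class FaceDetectSketch
{
    // Rough equivalent of the Flow HTTP action: POST the binary image to the detect
    // endpoint with the subscription key header and the returnFaceAttributes query.
    public static async Task<string> DetectAsync(byte[] imageBytes, string apiKey)
    {
        var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", apiKey);

        string uri = "https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect"
            + "?returnFaceAttributes=age,gender,emotion,smile,hair,makeup,accessories,occlusion,exposure,noise";

        using (var content = new ByteArrayContent(imageBytes))
        {
            content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
            var response = await client.PostAsync(uri, content);
            return await response.Content.ReadAsStringAsync();
        }
    }
}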

Now we need another action to pass the result back to PowerApps as the response. In a new action, search for Response; set the Status Code to 200, set the Body to the Body output of the HTTP action, and paste the code below into the Response Body JSON Schema.

Code:
{
  "type": "array",
  "items": {
    "type": "object",
    "properties": {
      "faceId": {
        "type": "string"
      },
      "faceRectangle": {
        "type": "object",
        "properties": {
          "top": { "type": "integer" },
          "left": { "type": "integer" },
          "width": { "type": "integer" },
          "height": { "type": "integer" }
        }
      },
      "faceAttributes": {
        "type": "object",
        "properties": {
          "smile": { "type": "number" },
          "age": { "type": "number" },
          "gender": { "type": "string" },
          "emotion": {
            "type": "object",
            "properties": {
              "anger": { "type": "number" },
              "contempt": { "type": "number" },
              "disgust": { "type": "number" },
              "fear": { "type": "number" },
              "happiness": { "type": "number" },
              "neutral": { "type": "number" },
              "sadness": { "type": "number" },
              "surprise": { "type": "number" }
            }
          },
          "exposure": {
            "type": "object",
            "properties": {
              "exposureLevel": { "type": "string" },
              "value": { "type": "number" }
            }
          },
          "noise": {
            "type": "object",
            "properties": {
              "noiseLevel": { "type": "string" },
              "value": { "type": "number" }
            }
          },
          "makeup": {
            "type": "object",
            "properties": {
              "eyeMakeup": { "type": "boolean" },
              "lipMakeup": { "type": "boolean" }
            }
          },
          "accessories": {
            "type": "array"
          },
          "occlusion": {
            "type": "object",
            "properties": {
              "foreheadOccluded": { "type": "boolean" },
              "eyeOccluded": { "type": "boolean" },
              "mouthOccluded": { "type": "boolean" }
            }
          },
          "hair": {
            "type": "object",
            "properties": {
              "bald": { "type": "number" },
              "invisible": { "type": "boolean" },
              "hairColor": {
                "type": "array",
                "items": {
                  "type": "object",
                  "properties": {
                    "color": { "type": "string" },
                    "confidence": { "type": "number" }
                  },
                  "required": [ "color", "confidence" ]
                }
              }
            }
          }
        }
      }
    },
    "required": [ "faceId", "faceRectangle", "faceAttributes" ]
  }
}
Now we just need to save the flow. The flow is created; next we connect it to PowerApps. Go back to PowerApps, add a button via Insert -> Button, connect the created flow via Action -> Flows, select the flow, and paste the code below into the button's OnSelect: ClearCollect(facedata,yourflowname.Run(First(capturedimage).Url)). Replace yourflowname with the name of your flow.
The next step is to create a gallery by clicking Insert -> Gallery -> Blank vertical.
This adds a large gallery to the screen; resize it so that it covers the image. There are two things we need to do. First, add the data source: we want the gallery to reflect the detected faces, so we bind the face detection result to it. Under Properties you will see Items with a drop-down; select facedata there, and the data is linked.
Now we need to insert a rectangle for the face detection box. A rectangle can be inserted via Insert -> Icons -> Rectangle. Next, change its properties to make it an unfilled rectangle: click on the rectangle and increase the border size; you can also change the color and so forth. However, the rectangle is still not dynamic; it always sits at the top of the window, and if you run the app you will still not see it around a face. To draw the rectangle around each face in the image, we need to align it by setting the following parameters.
First, the OnSelect: as you can see in the picture, the formula for the OnSelect attribute is Select(Parent).
The next parameters we need to set control the location and size of the rectangle.
Click on the Height parameter and enter the code below:
ThisItem.faceRectangle.height*(Image1.Height/Image1.OriginalHeight)
do the same for width
ThisItem.faceRectangle.width*(Image1.Width/Image1.OriginalWidth)
and for the X value
ThisItem.faceRectangle.left*(Image1.Width/Image1.OriginalWidth)
and for the Y value
ThisItem.faceRectangle.top*(Image1.Height/Image1.OriginalHeight)
To make the app interactive, change the image's Height, Width, X, and Y to Camera1.Height, Camera1.Width, Camera1.X, and Camera1.Y respectively, and bring Image1 to the front.
Now we are going to put some labels on the page to show the age, gender, expression, and so forth.
To show this information, insert a new label on the page and set its Text to:
“Age:”& Gallery1.Selected.faceAttributes.age
Then add other attributes such as gender, happiness, neutral, and so forth.
For example, for happiness:
“Happiness:”& Gallery1.Selected.faceAttributes.emotion.happiness
“Gender:”& Gallery1.Selected.faceAttributes.gender
To show the hair color, we can create a data table by clicking Insert -> Data Table; in the Items property of the Data Table, write the code below:
Gallery1.Selected.faceAttributes.hair.hairColor
To delete the photo, insert a trash icon via Insert -> Icons -> Trash and change its OnSelect to UpdateContext({conShowPhoto:false})
Now click the Preview button, take a photo, check the app, then delete the photo and play again.

Sunday, June 4, 2017

Face API Using Microsoft Cognitive Services

The Face API, which is part of Microsoft Cognitive Services, helps you to identify and detect faces. It can also find similar faces and verify whether two images show the same person. In this blog post, I'll just use the detect service, which detects faces and returns the gender, age, emotions, and other data for each face.

Prerequisites: Create the Face API Service in Azure


As with all Microsoft Cognitive Services, you can create the Face API service in Azure via the portal. It is part of the "Cognitive Services APIs", so just search for it and create it.



Select the Face API as the type of the cognitive service and check the pricing options:



Now, we need to note down the API key and the API endpoint URL. Navigate to the service in the Azure portal and you will see the endpoint URL in the overview. It's currently only available in West US, so the endpoint URL will be: https://westus.api.cognitive.microsoft.com/face/v1.0
The keys can be found in “Keys” – just copy one of them and you can use it later in the application:



Using the Face API with C#.Net

The Face API can be accessed from C# with a simple HttpClient or with the NuGet package Microsoft.ProjectOxford.Face. My first sample uses the HttpClient, just to show how it works; this approach also returns, of course, all data that is currently available. The NuGet package is not fully up to date, so it does not, for example, contain the emotions.

Access Face API with C# and HttpClient

In the following sample, I'll just send an image to the Face API and show the JSON output in the console. If you want to work with the data, you can use Newtonsoft.Json with JObject.Parse or, as already stated, the NuGet package described later in this post.
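Since the detect call returns a JSON array, JArray.Parse from Newtonsoft.Json.Linq is the natural fit here. The snippet below is only a sketch that could be dropped into the Program class that follows; it assumes jsonText holds the response string returned by DetectFaces:

using Newtonsoft.Json.Linq;

// Sketch: summarize the detect response (jsonText) returned by DetectFaces below.
public static void PrintFaceSummary(string jsonText)
{
    var faces = JArray.Parse(jsonText);   // detect returns a JSON array of faces
    foreach (var face in faces)
    {
        var attributes = face["faceAttributes"];
        Console.WriteLine($"Age: {attributes["age"]}, Gender: {attributes["gender"]}, " +
                          $"Happiness: {attributes["emotion"]["happiness"]}");
    }
}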


using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
namespace MyAzureCognitiveService.Face
{
    class Program
    {
        private static string APIKEY = "[APIKEY]";
        static void Main(string[] args)
        {
            Console.WriteLine("Welcome to the Azure Cognitive Services - Face API");
            Console.WriteLine("Please enter image url:");
            string path = Console.ReadLine();
             
            Task.Run(async () =>
            {
                var image = System.IO.File.ReadAllBytes(path);
                var output = await DetectFaces(image);
                Console.WriteLine(output);
            }).Wait();
             
            Console.WriteLine("Press key to exit!");
            Console.ReadKey();
        }
        public static async Task<string> DetectFaces(byte[] image)
        {
            // Call the Face API detect endpoint with the subscription key header
            // and request the face attributes we want back.
            var client = new HttpClient();
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", APIKEY);
            string requestParams = "returnFaceId=true&returnFaceLandmarks=true&returnFaceAttributes=age,gender,headPose,smile,facialHair,glasses,emotion";
            string uri = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect?" + requestParams;
            using (var content = new ByteArrayContent(image))
            {
                content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
                var response = await client.PostAsync(uri, content);
                var jsonText = await response.Content.ReadAsStringAsync();
                return jsonText;
            }
        }
    }
}


For testing, I used the image below; the output is shown after it.



output.json

[{
        "faceId": "c41cd9de-76c8-4f10-b6f5-d01bb08ec616",
        "faceRectangle": {
            "top": 332,
            "left": 709,
            "width": 48,
            "height": 48
        },
        "faceLandmarks": {
            "pupilLeft": {
                "x": 723.6,
                "y": 344.7
            },
            "pupilRight": {
                "x": 744.2,
                "y": 346.3
            },
            "noseTip": {
                "x": 732.8,
                "y": 357.6
            },
            "mouthLeft": {
                "x": 720.7,
                "y": 365.6
            },
            "mouthRight": {
                "x": 743.7,
                "y": 367.1
            },
            "eyebrowLeftOuter": {
                "x": 715.8,
                "y": 341.8
            },
            "eyebrowLeftInner": {
                "x": 728.3,
                "y": 341.2
            },
            "eyeLeftOuter": {
                "x": 720.4,
                "y": 345.1
            },
            "eyeLeftTop": {
                "x": 723.3,
                "y": 344.5
            },
            "eyeLeftBottom": {
                "x": 723.3,
                "y": 345.8
            },
            "eyeLeftInner": {
                "x": 726.3,
                "y": 345.5
            },
            "eyebrowRightInner": {
                "x": 738.2,
                "y": 342.2
            },
            "eyebrowRightOuter": {
                "x": 752.0,
                "y": 342.8
            },
            "eyeRightInner": {
                "x": 740.5,
                "y": 346.3
            },
            "eyeRightTop": {
                "x": 743.6,
                "y": 345.7
            },
            "eyeRightBottom": {
                "x": 743.3,
                "y": 347.1
            },
            "eyeRightOuter": {
                "x": 746.4,
                "y": 347.0
            },
            "noseRootLeft": {
                "x": 730.5,
                "y": 346.3
            },
            "noseRootRight": {
                "x": 736.4,
                "y": 346.5
            },
            "noseLeftAlarTop": {
                "x": 728.3,
                "y": 353.3
            },
            "noseRightAlarTop": {
                "x": 738.3,
                "y": 353.7
            },
            "noseLeftAlarOutTip": {
                "x": 726.2,
                "y": 356.6
            },
            "noseRightAlarOutTip": {
                "x": 739.8,
                "y": 357.7
            },
            "upperLipTop": {
                "x": 733.0,
                "y": 365.1
            },
            "upperLipBottom": {
                "x": 732.7,
                "y": 366.4
            },
            "underLipTop": {
                "x": 731.7,
                "y": 370.6
            },
            "underLipBottom": {
                "x": 731.4,
                "y": 373.1
            }
        },
        "faceAttributes": {
            "smile": 1.0,
            "headPose": {
                "pitch": 0.0,
                "roll": 3.2,
                "yaw": -0.5
            },
            "gender": "male",
            "age": 33.6,
            "facialHair": {
                "moustache": 0.0,
                "beard": 0.2,
                "sideburns": 0.2
            },
            "glasses": "ReadingGlasses",
            "emotion": {
                "anger": 0.0,
                "contempt": 0.0,
                "disgust": 0.0,
                "fear": 0.0,
                "happiness": 1.0,
                "neutral": 0.0,
                "sadness": 0.0,
                "surprise": 0.0
            }
        }
    }
]



It seems I look like a 33-year-old man and that my face in the image is pure happiness (100%). All other emotions are non-existent (0%).
Access Face API with C# and the NuGet package
As already mentioned, there is the NuGet package Microsoft.ProjectOxford.Face, which makes it very easy to access the Face API. Unfortunately, it does not wrap all properties (emotions), but there are already some commits in the GitHub project (https://github.com/Microsoft/Cognitive-Face-Windows) that will fix that.
Detect faces
This is nearly the same sample as above, but this time I'll use the NuGet package. As already mentioned, the package does not currently contain the emotions, which is why I'll just show the smile factor.


using Microsoft.ProjectOxford.Face;
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
namespace MyAzureCognitiveService.Face
{
    class Program
    {
        private static string APIKEY = "[APIKEY]";
        static void Main(string[] args)
        {
            Console.WriteLine("Welcome to the Azure Cognitive Services - Face API");
            Console.WriteLine("Please enter image url:");
            string path = Console.ReadLine();
            Task.Run(async () =>
            {
                var faces = await DetectFaces(path);
                foreach(var face in faces)
                {
                    Console.WriteLine($"{face.FaceAttributes.Gender},
{face.FaceAttributes.Age}: Smile: {face.FaceAttributes.Smile}");
                }
            }).Wait();
            Console.WriteLine("Press key to exit!");
            Console.ReadKey();
        }
        public static async Task<Microsoft.ProjectOxford.Face.Contract.Face[]> DetectFaces(string path)
        {
            var client = new FaceServiceClient(APIKEY);
            using (System.IO.Stream stream = System.IO.File.OpenRead(path))
            {
                var data = await client.DetectAsync(stream, true, true, new List<FaceAttributeType>()
                {
                    FaceAttributeType.Age,
                    FaceAttributeType.Gender,
                    FaceAttributeType.Glasses,
                    FaceAttributeType.Smile
                });
                return data;
            }
        }
    }
}


In this post, I used the detect service, but the Face API in Cognitive Services offers much more functionality.
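For example, the verify endpoint checks whether two previously detected faces belong to the same person. A minimal sketch, assuming the two face IDs come from earlier detect calls and the same West US endpoint and key are used:

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class FaceVerifySketch
{
    // Sketch: verify whether two previously detected faces belong to the same person.
    public static async Task<string> VerifyFacesAsync(string faceId1, string faceId2, string apiKey)
    {
        var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", apiKey);

        string uri = "https://westus.api.cognitive.microsoft.com/face/v1.0/verify";
        string body = "{\"faceId1\": \"" + faceId1 + "\", \"faceId2\": \"" + faceId2 + "\"}";

        var content = new StringContent(body, Encoding.UTF8, "application/json");
        var response = await client.PostAsync(uri, content);

        // The response contains an "isIdentical" flag and a "confidence" score.
        return await response.Content.ReadAsStringAsync();
    }
}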