Oscar Fraxedas

I needed to add OCR capabilities to an application I've been developing for a while. Some time ago I saw an Azure Computer Vision demo, and I wanted to try it out.

Their API is quite simple and well documented:

There's also a NuGet package that will speed up your development time:

I was able to build a proof of concept in an hour. It was nothing fancy: just a Xamarin.Forms app that captures a photo and sends the stream to Azure for OCR.
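As a rough sketch of what that call can look like — assuming the Microsoft.Azure.CognitiveServices.Vision.ComputerVision NuGet package, with a placeholder key and endpoint rather than the values from my app — sending a photo stream to the OCR endpoint boils down to this:

```csharp
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

public static class OcrSample
{
    // Placeholder credentials; substitute the key and endpoint
    // from your own Azure Computer Vision resource.
    static readonly ComputerVisionClient Client =
        new ComputerVisionClient(new ApiKeyServiceClientCredentials("<your-key>"))
        {
            Endpoint = "https://<your-resource>.cognitiveservices.azure.com"
        };

    public static async Task<string> RecognizeAsync(Stream photoStream)
    {
        // Send the captured photo stream to Azure for OCR.
        OcrResult result = await Client.RecognizePrintedTextInStreamAsync(
            detectOrientation: true, image: photoStream);

        // Flatten regions -> lines -> words into plain text.
        var text = new StringBuilder();
        foreach (var region in result.Regions)
            foreach (var line in region.Lines)
                text.AppendLine(string.Join(" ", line.Words.Select(w => w.Text)));
        return text.ToString();
    }
}
```

In the proof of concept, the stream passed in would be the photo captured by the Xamarin.Forms app.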

You can find the code on GitHub:

The application looks like this:


The story of a mediocre demo

I recently made a demo of an Android app I was working on. My setup was pretty simple:

- I used Vysor to share the phone screen with the laptop

- I connected my laptop to the office projector

Things went surprisingly well from a technical point of view, which doesn't happen often. The laptop behaved, the projector didn't fail, and the network didn't hiccup.

On the other hand, my demo wasn't clear: some of the gestures and taps were hard to notice. The touches on my phone weren't being displayed on the projector, so people watching didn't get any feedback.

A solution already exists in Android. There's an option to show taps, and this is how you enable it:

- Open the Settings app.

- Scroll to the bottom and select About phone.

- Scroll to the bottom and tap Build number 7 times.

- Return to the previous screen to find Developer options near the bottom.

- Open Developer options.

- Enable Show taps to display taps when you touch the screen.
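If the device is already set up for development, the same toggle can also be flipped from the command line: the Show taps switch is backed by the `show_touches` system setting, which adb can write directly (assuming adb is installed and USB debugging is enabled):

```shell
# Enable the "show taps" overlay (1 = on, 0 = off)
adb shell settings put system show_touches 1

# Verify the current value
adb shell settings get system show_touches
```

This is handy when preparing a demo device in a hurry, and it avoids tapping Build number seven times.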

Now whenever I capture a video or share my screen in a demo, viewers will understand what I'm doing.

This is how the demo will look:


I'm currently working on a mobile application developed in Xamarin.Forms.

There were some signs of memory issues, so I finally decided to try the Xamarin Profiler.

You'll need a Visual Studio Enterprise subscription to use it; even a trial version will work.

After spending 5 minutes reading the documentation and 2 minutes running the application with the Profiler, I was able to spot an issue: I was creating too many byte arrays.

Locating my code in the stack trace was simple enough.

This is what the function looked like:

public async Task<ImageSource> Adjust(Bitmap bitmap)
{
    var stream = new MemoryStream();
    await bitmap.CompressAsync(Bitmap.CompressFormat.Png, 100, stream);
    var tempFile = Path.GetTempFileName();
    var tempFileStream = new FileStream(tempFile, FileMode.Create);
    await tempFileStream.WriteAsync(stream.ToArray(), 0, stream.ToArray().Length);
    return ImageSource.FromFile(tempFile);
}


I created a memory stream to hold the bitmap, converted it to a byte array twice before writing it to a file stream, and never disposed of either stream.

The fix was very simple: I used a file stream to hold the bitmap, wrapped in a using statement.

The new function looks like:

public async Task<ImageSource> Adjust(Bitmap bitmap)
{
    var tempFile = Path.GetTempFileName();
    using (var stream = new FileStream(tempFile, FileMode.Create))
    {
        await bitmap.CompressAsync(Bitmap.CompressFormat.Png, 100, stream);
    }
    return ImageSource.FromFile(tempFile);
}


A new execution of the Profiler revealed that the problem with the byte arrays was gone.

Being able to tackle this small issue in a couple of hours is encouraging.

If you want to learn more about the Xamarin Profiler, visit:


