iOS USB Data Transfer in a DIL Environment With Xamarin

By Development, Mobile, Xamarin

Transferring data across mobile devices can be a daunting problem when those devices are in a DIL (Disconnected, Intermittent, Low bandwidth) environment where you do not want your device to transmit signals that could be detected. For small data packages it may make sense to use QR codes to transfer data across devices that are not connected. However, this type of solution does not scale if you have large packets of data that need to be transferred. For larger amounts of data, a laptop could be used as an intermediary between two devices. With an Android device the simplest way to do this would be to program the app on the device to dump the data to be transferred into an external file located in "My Files." The Android device could then be plugged into a laptop, where the device would be seen as an external mass storage device and the data could easily be retrieved and then transferred to another Android device. Unfortunately, it is not this simple with iOS.

Communicating With an iOS Application through USB

The iOS mobile operating system limits the way in which Apple mobile devices (and therefore the applications on those devices) can communicate via the physical USB/Lightning connection. The typical portal for transferring data over a USB connection between a computer and an iOS mobile device (and vice-versa) is the iTunes desktop application. iTunes accomplishes this by connecting to the iOS device through usbmuxd. When the iTunes application is installed on a computer, a utility called usbmuxd (USB Multiplexing Daemon) is installed along with it. usbmuxd is a socket daemon that is started on macOS by launchd (see /System/Library/LaunchDaemons/com.apple.usbmuxd.plist). It creates a listening UNIX domain socket at /var/run/usbmuxd. On Windows, the service that hosts this program is named "Apple Mobile Device Service" and can be seen in services.msc.

Can usbmuxd Be Leveraged By Third Party Apps?

usbmuxd is a socket daemon that listens for iOS device connections to a computer through its USB ports. When usbmuxd detects an iOS device running in normal mode (as opposed to recovery mode), it connects to it and starts relaying requests that it receives via /var/run/usbmuxd. This is the only way to connect directly to an iOS mobile device through a USB connection. This means that if you want to create a direct line of communication through USB between a desktop application and a mobile application, that connection must be made through usbmuxd. In order to establish this socket connection through usbmuxd, some code needs to be implemented in both the desktop app and the mobile app. The sample code below is based on code written by Carlos Rodriguez in his blog article "Communicating with your iOS app over USB (C# and/or Xamarin)."

How To Create a Connection Using usbmuxd

First, the NuGet package iMobileDevice.Net needs to be added to the desktop application project. Here is some sample code using that package to listen for a usbmux "AddDevice" event:
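(The original post embedded Rodriguez's samples as images. The sketch below reconstructs the listener in that spirit, using iMobileDevice.Net's generated libimobiledevice bindings; exact type and member names may vary between package versions.)

using System;
using iMobileDevice;
using iMobileDevice.iDevice;

class Program
{
    static void Main()
    {
        // Load the native libimobiledevice binaries that ship with the package.
        NativeLibraries.Load();

        var idevice = LibiMobileDevice.Instance.iDevice;

        // usbmuxd raises an "Add" event whenever an iOS device in normal
        // mode is plugged into a USB port.
        idevice.idevice_event_subscribe(OnDeviceEvent, IntPtr.Zero);

        Console.WriteLine("Waiting for a device...");
        Console.ReadLine();
    }

    static void OnDeviceEvent(ref iDeviceEvent deviceEvent, IntPtr userData)
    {
        if (deviceEvent.@event == iDeviceEventType.DeviceAdd)
        {
            // Hand the device's UDID off to the Connect method shown next.
            Connect(deviceEvent.udidString);
        }
    }
}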


Below, the Connect method gets a “DeviceHandle” reference using the UDID of the connected device. Once it gets that handle, it calls the “ReceiveDataFromDevice” method:
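(Again a reconstruction rather than the original image. idevice_new and idevice_connect are the underlying libimobiledevice calls, and 5050 is the arbitrary port this example uses throughout.)

static void Connect(string udid)
{
    var idevice = LibiMobileDevice.Instance.iDevice;

    // Get a handle to the device identified by its UDID.
    iDeviceHandle deviceHandle;
    idevice.idevice_new(out deviceHandle, udid).ThrowOnError();

    // Ask usbmuxd to relay a connection to port 5050 on the device,
    // where the iOS app is listening.
    iDeviceConnectionHandle connection;
    idevice.idevice_connect(deviceHandle, 5050, out connection).ThrowOnError();

    ReceiveDataFromDevice(connection);
}

static void ReceiveDataFromDevice(iDeviceConnectionHandle connection)
{
    var buffer = new byte[4096];
    uint received = 0;

    // Block until the app on the device sends something.
    LibiMobileDevice.Instance.iDevice.idevice_connection_receive(
        connection, buffer, (uint)buffer.Length, ref received);

    Console.WriteLine(System.Text.Encoding.UTF8.GetString(buffer, 0, (int)received));
}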

And finally, the below line sends data to the iOS device.
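(A sketch; idevice_connection_send mirrors the receive call above.)

uint sent = 0;
byte[] payload = System.Text.Encoding.UTF8.GetBytes("hello device");

// Relay the bytes to the iOS app through the usbmuxd connection.
LibiMobileDevice.Instance.iDevice.idevice_connection_send(
    connection, payload, (uint)payload.Length, ref sent);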


The iOS application would have to implement some code like the following in order to ‘listen’ for an incoming connection on port 5050 (an arbitrarily chosen number that matches the port number in the desktop app code).
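(The original iOS-side sample is also an image; below is a minimal stand-in. Because usbmuxd relays the desktop connection to a TCP port on the device, a Xamarin.iOS app can simply listen on a local socket; the class name here is hypothetical.)

using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

public class UsbConnectionListener
{
    public async Task StartListeningAsync()
    {
        // Listen on the same port the desktop app asked usbmuxd to relay to.
        var listener = new TcpListener(IPAddress.Loopback, 5050);
        listener.Start();

        var client = await listener.AcceptTcpClientAsync();
        using (var stream = client.GetStream())
        {
            var buffer = new byte[4096];
            int read = await stream.ReadAsync(buffer, 0, buffer.Length);
            var message = Encoding.UTF8.GetString(buffer, 0, read);

            // Acknowledge back to the desktop app.
            var reply = Encoding.UTF8.GetBytes("Received: " + message);
            await stream.WriteAsync(reply, 0, reply.Length);
        }
    }
}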

Once the above code has been executed, a connection resembling a TCP network connection will be established between the MCC Utilities desktop app and the communication component inside the app on the iOS mobile device.

Conclusion

It is possible to create a direct connection between an iOS mobile application and a desktop application over USB by leveraging the usbmuxd socket daemon. Data can then be communicated back and forth through this connection in order to accomplish the prescribed data sync between the two applications. This solution can be used to transfer data between iOS devices in a DIL environment, with the desktop app as the intermediary.

How I Passed Scrum.org Scrum Master Certification

By Agile, Development

Work in progress as of 3 June 2018… please don’t judge yet

Why bother to become Scrum certified with Scrum.org? First and foremost, I believe in self-reliance and not needing to go through "a course". Secondly, going through a certification process forces you to get uncomfortable and challenge what you know. If you aren't familiar with Scrum.org vs. Scrum Alliance, please check out our previous article to see why I chose Scrum.org.

I went through training years ago and have run several projects with ease. I spent the $150, sat down, and took the assessment.

My Results

Name: PSM I
Description: Professional Scrum Master I Assessment

Thank you for taking the PSM I Assessment. We regret that you did not receive the minimum passing score of 85%. An e-mail containing your score will be sent to you.

Your result has been recorded and you can safely close your browser or return to Scrum.org by clicking the button below.

Scrum on, Ken Schwaber

Score: NOT PASSED
57 points scored (or 71.3%) out of 80 maximum points

(a score of 85.0% or greater is needed to pass this test)

From Failure To Success

As a minor perfectionist, my first reaction was to go through the normal stages of failing: blame, sadness, self-loathing, anger, and then acceptance. This cycle took about 2 minutes, and now I have to solve it. But what is the best way to do that? Read.

All Hail Chandini Paterson!

I have never met Chandini Paterson, but I loved one of his posts in a community forum where he outlined what he studied to pass the certification test. I am going to outline those steps and put some time against them. His study technique was as follows:

– Reading the Scrum Guide and understanding the concepts
– Taking the Scrum Open assessments.

Time Boxing Prep Work – The Plan

One of the most important things you can do is put a plan in place and execute it. Following the book "Deep Work", I am going to block off 90 minutes every morning starting at 5am. In two weeks' time, I should be able to get in 10 × 1.5 hours = 15 hours of prep time. This seems a little extreme, but getting this certification would make it all the more worthwhile. The other option is to spend 4 hours commuting back and forth over two 9-hour days for a total of 22 hours.

The Scrum Guide

You can download it from either Bytelion or Scrum.org.

Total Pages = 19.  Total read time = ?


AWS Sagemaker – predicting gasoline monthly output

By Artificial Intelligence, AWS, Development, Python, Sagemaker

AWS continues to wow me with all of the services that they are coming out with. What Amazon is doing is a very smart strategy: they are leveraging their technology stack to build more advanced solutions. In doing so, Amazon Web Services is following the "Profit From The Core" strategy to a T. Aside from following Amazon's world domination plan, I wanted to see how well their rollout of artificial intelligence tools, like Sagemaker, went.

Background

There are many articles about how AI works. In some cases, an application is extraordinarily simple. In other cases, it is endlessly complex. We are going to stick with the simplest model, in which we have to do the following steps:

  1. Collect data
  2. Clean Data
  3. Build Model
  4. Train Model
  5. Predict Something

Amazon has tried to automate these steps as much as possible. From Amazon's site: "Amazon SageMaker is a fully-managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. Amazon SageMaker removes all the barriers that typically slow down developers who want to use machine learning."

Let's see how well they do. Gentle people… let's start our clocks. The time is 20 May 2018 @ 6:05pm.

Notebook Instances

The first thing that you do as part of your training is build notebooks. According to Project Jupyter's developers, a notebook is an application that allows you to create and share documents that contain live code, equations, visualizations, and narrative text.

You follow the simple tutorial and it looks something like this.

AWS Sage simple Jupyter Notebook

Time: 6:11:34 (so far so good)

Example Selection – Time Series Forecast

The first thing that we want to do is go to the "SageMaker Examples" tab and make a copy of "linear_time_series_forecast_2019-05-20". I have had some experience predicting when events will happen and wanted to follow something that I already know. If you aren't familiar with time series forecasting, please check out this Coursera video.

Time: 6:20:17

Read Background

Forecasting is potentially the most broadly relevant machine learning topic there is. Whether predicting future sales in retail, housing prices in real estate, traffic in cities, or patient visits in healthcare, almost every industry could benefit from improvements in their forecasts. There are numerous statistical methodologies that have been developed to forecast time-series data. However, the process for developing forecasts tends to be a mix of objective statistics and subjective interpretations.

Properly modeling time-series data takes a great deal of care. What’s the right level of aggregation to model at? Too granular and the signal gets lost in the noise, too aggregate and important variation is missed. Also, what is the right cyclicality? Daily, weekly, monthly? Are there holiday peaks? How should we weight recent versus overall trends?

Linear regression with appropriate controls for trend, seasonality, and recent behavior remains a common method for forecasting stable time-series with reasonable volatility. This notebook will build a linear model to forecast weekly output for US gasoline products from 1991 to 2005. It will focus almost exclusively on the application. For a more in-depth treatment of forecasting in general, see Forecasting: Principles & Practice. In addition, because our dataset is a single time-series, we'll stick with SageMaker's Linear Learner algorithm. If we had multiple, related time-series, we would use SageMaker's DeepAR algorithm, which is specifically designed for forecasting. See the DeepAR Notebook for more detail.

Time: 6:24:13

S3 Setup

Let’s start by specifying:

  • The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.
  • The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role arn string(s).

I set up a simple S3 bucket like this: 20180520-sage-test-v1-tm

Import the Python libraries.

Got distracted and played with all of the functions. Time: 6:38:07.

Data

Let’s download the data. More information about this dataset can be found here.

You can run some simple plots using matplotlib and Pandas.

Sage time series gas plots

 

Transform Data To Predictive Model

Next we’ll transform the dataset to make it look a bit more like a standard prediction model.

This stage isn't immediately clear. If you just click through the buttons, it takes a few seconds; if you want to read through these stages, it will take you a lot longer. In the end, you should have the following files stored on S3.

Note, you can't review the content of these files in a text editor; the data is stored in binary.

Time: 7:02:43

I normally don’t use a lot of notebooks. As a result, this took a little longer because I ran into some problems.

Training

Amazon SageMaker's Linear Learner actually fits many models in parallel, each with slightly different hyperparameters. The model with the best fit is the one used. This functionality is automatically enabled, and we can influence it using parameters like:

  • num_models to increase the total number of models run. The specified parameter values will always be among those models, but the algorithm also chooses models with nearby parameter values, in case a nearby solution is more optimal. In this case, we're going to use the max of 32.
  • loss, which controls how we penalize mistakes in our model estimates. For this case, let's use absolute loss; since we haven't spent much time cleaning the data, absolute loss will adjust less to accommodate outliers.
  • wd or l1, which control regularization. Regularization helps prevent model overfitting by keeping our estimates from becoming too finely tuned to the training data (this is why it is good to make sure your training data is an appropriate sample of the entire data set). In this case, we'll leave these parameters at their default, "auto".

This part of the demo took a lot longer….

And it worked!

Ended at time: 7:21:54 pm.


The Forecast!

This is what we have all been waiting for!

For our example we'll keep things simple and use Median Absolute Percent Error (MdAPE), but we'll also compare it to a naive benchmark forecast (that week last year's demand * that week last year / that week two years ago).
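(In symbols, using our notation rather than the notebook's: let $y_t$ be actual weekly output and $\hat{y}_t$ the forecast, with 52 weeks to a year.)

\mathrm{MdAPE} = \operatorname{median}_t \frac{\lvert y_t - \hat{y}_t \rvert}{y_t}, \qquad \hat{y}^{\mathrm{naive}}_t = y_{t-52} \cdot \frac{y_{t-52}}{y_{t-104}}

That is, the naive forecast repeats the same week from last year, scaled by that week's year-over-year growth.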

As we can see, our MdAPE is substantially better than the naive benchmark. Additionally, we actually swing from a forecast that is too volatile to one that under-represents the noise in our data. However, the overall shape of the statistical forecast does appear to better represent the actual data.

Next, let’s generate a multi-step-ahead forecast. To do this, we’ll need to loop over invoking the endpoint one row at a time and make sure the lags in our model are updated appropriately.

 

Conclusion

It does appear that, for pre-built scenarios, AWS's Sagemaker works for linear time series prediction! While it doesn't make you a master data scientist, it does give you a simple place to train and practice with data sets. If you wanted to master time series, you could simply plug in other datasets, conduct the same sort of analysis, and cross-check your work with other people's results. With Sagemaker, you have a complete and working blueprint!

Wrap up time: 8:19:30pm (with some distractions and breaks)


Cross-Platform Optical Peer to Peer Data Sharing in a DIL Environment

By Development, Mobile, Xamarin

A DIL (Disconnected, Intermittent, Low bandwidth) environment presents many problems. Sharing data between users is a significant issue. Most applications rely on a robust and constant connection to a network in order to collect and distribute data between users. But what happens when there is no network connection and it is critical for close-proximity users to exchange data? What if they need to transfer data across mobile devices with different operating systems? There are a few possible solutions that can be developed to solve this problem using libraries available in the Xamarin stack.

Standard Options for Peer to Peer Data Sharing

Many mobile devices are able to send and receive data via Bluetooth, and it is possible to create a connection between devices over this band and transfer data. Bluetooth can work in many situations, but if the user is in an area with high interference in this frequency range, or if security is an issue and she/he does not want their signal discovered, Bluetooth is not an option. Near-Field Communication (NFC) is another technology that can be leveraged to create a connection between devices. NFC data transfer is secure due to its extremely close signal range (approximately 10cm). Unfortunately, at the time of this writing it is not available as a solution on iOS devices: Apple's Core NFC API has provided developers access only to the read functionality of the NFC chip in their devices. Another issue with NFC is that due to its low signal strength it is susceptible to interference. If Bluetooth and NFC do not work for your use-case, what other options are available?


Optical Peer-to-Peer Data Transfer

Another option to consider for peer to peer data transfer is optical. Data can be encoded into QR images and those images can be displayed on a device's screen. A second device can simply use its camera to read the image and then decode the QR image back into data. There are limits to the amount of data that can be transmitted in a single image. When encoding numeric data (0-9), up to 7089 characters can be stored in a single image. When using the limited 45-character alphanumeric set (0-9, A-Z, and $%*+-./:), up to 4296 characters can be encoded in a single image. You can also use the ISO 8859-1 character set to encode up to 2953 characters in a single image. If the amount of data you are attempting to transmit is greater than the capacity of a single image, a custom solution can break up the data into multiple images and recombine them on the receiving device, as sketched below. Custom solutions that handle these data issues can be leveraged on both major mobile platforms if they are developed using Xamarin.
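What follows is a minimal sketch of that chunking logic in C#. The 2953-character limit matches the ISO 8859-1 capacity mentioned above; the "index/total|" header format is our own invention, and rendering and scanning the resulting strings would still be handled by a QR library such as ZXing.Net.

using System;
using System.Collections.Generic;
using System.Linq;

public static class QrChunker
{
    // Max payload per image when using the ISO 8859-1 character set,
    // minus a few characters reserved for our "index/total|" header.
    private const int MaxPayload = 2953 - 8;

    // Split a large string into numbered chunks, one per QR image.
    public static List<string> Split(string data)
    {
        var chunks = new List<string>();
        int total = (data.Length + MaxPayload - 1) / MaxPayload;
        for (int i = 0; i < total; i++)
        {
            string body = data.Substring(i * MaxPayload,
                Math.Min(MaxPayload, data.Length - i * MaxPayload));
            chunks.Add($"{i + 1}/{total}|{body}");
        }
        return chunks;
    }

    // Recombine chunks on the receiving device, in whatever order
    // they were scanned.
    public static string Join(IEnumerable<string> chunks) =>
        string.Concat(chunks
            .Select(c => new
            {
                Header = c.Split('|')[0],
                Body = c.Substring(c.IndexOf('|') + 1)
            })
            .OrderBy(c => int.Parse(c.Header.Split('/')[0]))
            .Select(c => c.Body));
}

Because each chunk carries its own index, the images can be scanned in any order and still be recombined correctly.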

Optical peer-to-peer data transfer has some advantages when compared to other peer-to-peer data transfer methods. It is secure, because intercepting the transmission requires a clear line of sight to the image, and the data can also be encrypted prior to generating the encoded image, ensuring that the underlying data cannot be read without the encryption key. Because no radio signal is involved, there are no concerns about electromagnetic interference disrupting the communication. The images can be read in daylight or dark thanks to the back-lit screen on mobile devices. Another benefit of this method of data transfer is that the transmission works across different mobile platforms: the QR code is platform agnostic, so an Android device can transfer data to an iOS device and vice versa.

Conclusion

Optical peer-to-peer data transfer is a novel solution for data transmission across mobile devices in a DIL environment. NFC may not be 100% reliable in all conditions, and currently peer-to-peer sharing using NFC is not an option on iOS. Bluetooth transmission may not always be an option if the use-case requires that the user not be discoverable, or if there are other security concerns. Directly transmitting data encoded into QR codes from one device's screen to another device's onboard camera is a simple and elegant solution to peer-to-peer data sharing when other methods fall short. The Xamarin development platform is the most efficient way to create and deploy a cross-platform mobile app with this data transfer functionality.

Home Health Care - How Alexa Can Help

By Development, Innovators, Tools

Home Health Challenges

Home health care has many challenges and opportunities. Over the next 30 years the number of seniors needing these services is expected to quadruple, and a low level of technical sophistication can be expected among that user base. Therefore, it makes sense to make all technical interaction as simple as possible. Hands-free communication is also a must, as it eases the process of interacting with a computer. Digital assistants such as Alexa and Siri are recent innovations that excel at hands-free communication, with several applications within this space.

Voice Interaction

Today, this technology is becoming more and more prevalent in our day-to-day lives. Software like Apple's Siri, Microsoft's Cortana, Google Home, and Amazon's Alexa have become deeply ingrained in these companies' product lines. Voice interaction is the way of the future, and top tech companies are pushing it forward by making it one of the most accessible features available. For instance, Cortana is on the taskbar, and Siri and the Google assistant can be accessed with your voice or just the touch of a button.

Amazon’s Alexa

While Microsoft, Google, and Apple have enabled their digital assistants on a wide range of devices, Amazon has focused specifically on integration with "smart home" devices and their accompanying apps (called Skills by Amazon), and it has become a clear leader in this segment of the field.

Summary

So what does this mean for home care providers? Staying in touch with customers just got a whole lot easier. Being well-informed and keeping track of adherence to a daily schedule is also desirable, especially if live-in care is not an option. That's why at Bytelion we're developing Alexa applications for the assisted living space that allow providers to give more effective service to their clients, enabling them to make more informed decisions and improving the quality of life for customers and their families.

We’re working on apps that streamline these processes, by giving caregivers the ability to have insights into their customers’ habits like never before. With intelligent schedule reminders, home automation, and hands-free calling, the future looks bright.

Best Practices: Mobile Apps for Seniors

By Mobile

Introduction

Technology is more prominent in our lives today than it has ever been. We might not all have flying cars and jetpacks, but advances in mobile technology have made smartphones so accessible that over 36% of the world is connected through one. As with most technologies, older generations are typically the last to adopt, and this is also true regarding mobile devices.

One of the factors impacting the use of mobile technologies is that other advances are allowing people to live longer. Currently, 5% of the US population is over 65. By 2050 this number is expected to jump to 22%. Current seniors were 45 to 55 when smartphones first came out, so they have had some time to adapt to the technology. The proof of this is that in the past 5 years the number of seniors that own smartphones has doubled and is now at 42%.

Now, as more seniors adopt mobile technology and the number of seniors continues to grow, the mobile industry has to adapt to fill the needs of those users. Doing so requires a focus on their user experience. However, before we can do that, we need to understand the older generation's motivations, aspirations, and collective personality. What are their frustrations and what are their goals? For most people in the tech industry, these are our parents and grandparents. Consider your own: I am certain that at some point you have had to help them with some kind of technology issue. Why did they need or want your help? Could they have figured it out on their own?

Reflection

As you may have discerned, seniors are extremely habit driven. I remember a time when I was still in grade school and a new supermarket opened up down the street from my grandparents’ house. I thought this was great. It was new. It was only 5 minutes away as opposed to 20 minutes away. It was very nice inside and it had everything. I only knew this because my parents took me there.

To my grandparents, the new supermarket might as well have not existed. Both of my grandparents still drove the extra 15 minutes each way to go to their old store. Theoretically, they could have gone to the new store and it would not have taken very long for them to learn the ins and outs of the new store, which would have eventually saved them time and money. They did not care. They had no interest in the new store despite the benefits. It simply did not matter. They wanted to stay with what they knew and what they were comfortable with, regardless of the benefits provided by the new store.

Moving Forward

Why is it that seniors, such as my grandparents, are hesitant to change their patterns and habits even when this change may benefit them? They are not stupid, so why do their priorities align in a way that makes them overlook what seem like obvious benefits? It is a combination of pain points, goals, and established mental models.

Some pain points are obvious and simple to address: as we get older, our sight, hearing, touch, and dexterity start to fade. Other pain points, such as memory loss, lack of energy, and the feelings that come from watching our faculties fade, require more abstract thinking to address.

The goals of seniors, for the most part, are the same as the goals of a younger audience. They want to be healthy, be social, travel, shop, have access to news and financial information, and participate in activities that make them happy. How they go about these goals is going to be quite different from younger generations. Let's think back to our earlier story and ask why the grandparents would continue to use the old store despite the benefits offered by the new store:

  1. As senses fade we become uncertain of ourselves and find comfort and security in what we know.
  2. Having to learn new things can be frustrating and can take time.
  3. New experiences mean a lack of control – the person must adapt and learn.
  4. New experiences require being comfortable with unknowns and relying on your senses.

Taking all of this information into account, how do we move forward? What can we do to address the pain points and goals of our audience so that they can have a great experience? The guide below provides some basic rules that any software company can apply to provide a better experience for their senior users.

Best Practices, Rules, and Guidelines

Visual  

  1. Make everything larger. This includes text (minimum 16 pt font), icons, touchpoints, buttons, and any and all interactive elements. We also need an intuitive way for the user to adjust the size of the text content on their own.
  2. Ensure that all areas have high contrast and that there are no low contrast areas where the user might not be able to identify content.
  3. Be selective with gradients because they can lead to low contrast areas.
  4. Ensure that touch feedback can be seen clearly even while the user's finger covers the touch target.
  5. Reduce the distance between sequential items, such as form fields, without making them so close that the screen appears cluttered and leads to cognitive overload.

Hearing

  1. All audio content should have easily accessible volume controls.
  2. Audio content should provide captions.
  3. Interactive audio feedback can be provided as another way of letting the user know that they are progressing. Positive vs. negative sounds can be a clear progress indicator, depending on the situation.

Touch

  1. Large touch targets are easier for seniors to touch. Even younger audiences get upset when touch targets are too small and they are unable to access the content they want.
  2. Haptic feedback on the downpress also provides an excellent way to let the user know that the software is recognizing their input. The more feedback provided, the better the user knows what is happening as they work their way through the app.

Interaction

  1. Navigation elements should be easy to find on all pages. The user should always know where they are within the app.
  2. On mobile, the user should always have access to the main navigation.
  3. The search function should be available on primary navigation pages and should be forgiving of spelling errors and offer suggestions based on the app.
  4. Error messages can instead be "helper messages". Do not be negative, and make sure that the wording provides a clear message of what happened and what to do next.
  5. Loading icons should be present for any action that is not immediate. This way the user knows the app is working on that action.
  6. If content does not fit on a single page then the ability to scroll down the page should be made obvious and clear.
  7. Any element represented by an icon should also have tooltips to provide more information to the user in case the icon is not recognized.
  8. A tutorial or help function should be provided and be easily accessible.
  9. Interactive elements should be clearly distinct from non-interactive elements.
  10. Avoid pulldowns and dropdowns as they tend to rely on fine motor skills.

Basics

  1. Do not use technical jargon, use clear and simple language.
  2. Do not require downloads. Many seniors feel uncomfortable with downloads.
  3. Important information should be distinct and clear from basic information.
  4. Color use should be conservative and with purpose.
  5. Avoid animations as they can be distracting.
  6. Provide extra spacing between lines of content so it is easy to read.
  7. Do not overlay text on top of imagery.

We can see that many of these rules, guidelines, and suggestions align with good design principles but are slightly exaggerated beyond what would apply to your typical user base. As you move forward with your project, communicate with and observe your users to ensure the software is specific to their needs. And remember that even within the senior user base there will be subgroups and niche groups that have their own needs, desires and goals that need to be catered to in order to provide an optimal user experience.

By: Marc Hausle

Marc Hausle is a UX Designer and Consultant who has made an impact on hundreds of apps in the Google Play Store. Marc approaches projects with a combination of logic and high-energy creativity that generates engaging and effortless experiences for users.

Entity Extraction On a Website | AWS Comprehend

By AWS, Comprehend, Development, Python

Use Case

You want to know what entities are embedded in a company's website so you can understand what that company is focused on. You can use a tool like this if you are prospecting, thinking about a partnership, etc. How do you do this in the most efficient way? There are some tools that have made this a lot easier.

1. Select Your Target

Here are the steps that we used for http://www.magicinc.org. It is a simple Squarespace site; you can see this by checking out https://builtwith.com/magicinc.org.

2. Get the data

For entity extraction, raw text is the goal. You want as much as you can get without having duplicates. Here is how you can pull everything that you need, using the following command line steps on a Mac.

  1. For the domain you want to search, change directories to a clean directory labeled YYYYMMDD_the_domain.
  2. Run this command: wget -p -k --recursive http://www.magicinc.org
  3. cd into the ./blog directory.
  4. Cat all of the blog articles out using this recursive command: find . -type f -exec cat {} >> ../catted_file ";"

3. Prep Query to an Entity Extraction Engine | Comprehend

In this simple case, we are going to query AWS's Comprehend service. We will need to write some simple Python 3 code.

Since we can't submit more than 5000 bytes per document, we need to break up our raw text into batches before submitting. To do that, I wrote some very simple code:


# Read the scraped text and split it into words.
temp = open('./body_output/catted_file', 'r').read()
strings = temp.split(" ")

aws_submission = ""
submission_counter = 0
aws_queued_objects = []

for word in strings:
    pre_add_submission = aws_submission
    aws_submission = aws_submission + " " + word
    # Comprehend rejects documents larger than 5000 bytes, so close out
    # the current batch just before it would cross that limit.
    if len(aws_submission.encode('utf-8')) > 5000:
        submission_counter = submission_counter + 1
        print("Number = " + str(submission_counter) + " with a byte size of " +
              str(len(pre_add_submission.encode('utf-8'))))
        aws_queued_objects.append(pre_add_submission)
        aws_submission = ""

# Append the final partial batch as well.
if aws_submission:
    aws_queued_objects.append(aws_submission)

Now we have to submit the batched job. This is very simple, assuming that you have the boto3 library properly installed and your AWS config set up correctly.

import boto3

client = boto3.client('comprehend')  # assumes AWS credentials are configured

# Note: batch_detect_entities accepts at most 25 documents per call.
response = client.batch_detect_entities(
    TextList=aws_queued_objects, LanguageCode='en')

Analyze

Now… all you have to do is visualize the results. Note, you need to visualize the results outside of the Comprehend tool, because there is no way to import data into that viewer. This snapshot is what it looks like.

More importantly, the key work is the analysis. We will leave that up to you!


Source Code

It was made to be as simple as possible without overcomplicating things.

Github: https://github.com/Bytelion/aws_comprehend_batched_job


Secure DIL Environment Login with Xamarin.Auth SDK

By Development, Mobile, Xamarin

Previously in this blog series I defined what a DIL environment is and described some of the key technical problems a DIL environment imposes on a mobile application that relies on web services for data and other functionality. Now it is time to begin looking at implementing specific solutions to some of these problems. In this article I will focus on solving the problem of how to implement a secure login for an application while in a DIL environment.

DIL User Login Authentication Sequence

In a normal connected environment, a user (User A) enters their name and password into the mobile application. The mobile app then sends those credentials to the backend web service for verification. If valid, the user is logged into the application and gains access to its resources. But what happens when the device is disconnected from the network? How can User A's credentials be validated? The mobile application must be designed to support DIL login. This can be accomplished by securely storing user credentials locally whenever a new user on a device successfully logs in to the application while the device and application are connected to the web service. Once User A has successfully logged in to the application on a specific device in a connected environment, User A can then log in to the application on that same device when it enters a DIL environment.

What if another user (User B) also wants to log in to the application on the same device in a DIL environment, except User B has not previously logged in on that device while it was connected? Unfortunately, User B will be unable to log in, even if she/he has valid credentials. It is impractical to store all valid user credentials for the application locally on a mobile device. The only way a user can log in to the application on any given device is if they have previously logged in to the application while the device was connected. This sequence is illustrated in the chart below:

Xamarin.Auth SDK

What do we need in order to implement a DIL login? The first step is to find a way to securely store verified user credentials. The Xamarin stack includes an SDK that provides a simple and secure cross-platform solution for local user credential storage and user authentication. Xamarin.Auth also includes OAuth authenticators with built-in support for identity providers including Google, Microsoft, Facebook, and Twitter. Additionally, Xamarin.Auth provides support for presenting the sign-in user interface. For more information on these features, check out the official Xamarin developer documentation here. The aspect of Xamarin.Auth that we are going to focus on is the secure local storage of user credentials.

Securely Store User Credentials

To make a DIL login possible, a user must first have a successful login on the device while the device is connected. After the credentials provided by the user have been authenticated by the web service, the verified credential data can be passed to an Account object from the Xamarin.Auth SDK. The Account object can then be saved securely using the Xamarin.Auth AccountStore class. Below is an example of how this can be implemented.
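(The original example is embedded as an image, so this sketch is representative. The "MyDilApp" service ID and the decision to store a salted hash rather than the raw password are our choices, not requirements of the SDK; also note that on Android, AccountStore.Create takes a Context argument.)

using System.Collections.Generic;
using Xamarin.Auth;

public void StoreVerifiedCredentials(string username, string passwordHash)
{
    // Wrap the verified credentials in a Xamarin.Auth Account.
    var account = new Account(username, new Dictionary<string, string>
    {
        // Store a salted hash, never the raw password.
        { "password_hash", passwordHash }
    });

    // AccountStore persists the account in the platform's secure storage.
    AccountStore.Create().Save(account, "MyDilApp");
}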

The AccountStore class maps to Keychain services in iOS and the KeyStore in Android. This makes it an excellent cross-platform solution for securely storing verified user credentials that can be used to authenticate user logins when the device enters a DIL environment. The verified credentials stored locally through the AccountStore class can be retrieved and used to verify a DIL login, as shown in the simple example below:
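(Again a representative sketch, using the same hypothetical service ID and hashed-password property as above.)

using System.Linq;
using Xamarin.Auth;

public bool ValidateOfflineLogin(string username, string passwordHash)
{
    // Look up any previously verified credentials saved on this device.
    var account = AccountStore.Create()
        .FindAccountsForService("MyDilApp")
        .FirstOrDefault(a => a.Username == username);

    // Grant access only if the entered credentials match what was stored
    // during an earlier connected login.
    return account != null
        && account.Properties["password_hash"] == passwordHash;
}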

Once the user’s credentials are verified against previously authenticated credentials, the user can be allowed access to the application’s functionality and data. If the credentials cannot be verified against the locally stored credentials the user should be denied access.

Conclusion

In a DIL environment, secure login is an issue that needs to be addressed. When developing applications using Xamarin, the Xamarin.Auth SDK provides an effective, efficient, and secure way to store verified user credentials across mobile platforms. That locally stored credential data can then be used, while the device is offline, to authenticate users who have previously logged in on that device. This gives users the ability to log in and access application features at any time, regardless of network status.

Exploring DIL Environment Limitations and Solutions with Xamarin

By Development, Mobile, Xamarin

In my last installment I described what a DIL environment is and how it can negatively impact a mobile application that relies on web services. When developing a mobile app, there are a number of scenarios you may face regarding the amount of control and DIL support you have with the backend web services you are consuming:

  • Custom Purpose-Built Backend: A custom backend that your team designs, controls, and configures to support DIL scenarios specific to your application.
  • Platform As A Service (PAAS): A backend utilizing a platform like Azure, ApiOmat, or Amazon Web Services that may offer some built-in DIL support which can be implemented and configured.
  • Virtual Blackbox Backend: A back-end that you have absolutely no control over and provides no DIL support.

In this article I will be examining the third scenario, a virtual blackbox backend. In order to maintain functionality for the mobile application in a DIL environment in a blackbox backend scenario, all DIL support must be implemented client-side, within the mobile application itself. This is a common situation when it comes to enterprise and third party application development.


Offline Caching Support - Preserving core application functionality when disconnected

How can a mobile application like this remain functional when it is cut off from its web service data stream? The only way is to cache incoming data locally on the mobile device for offline use. The complexity and amount of data consumed by the application are factors the developer should consider when determining the type of data store and its implementation. One of the most common methods would be to leverage an SQLite database within the mobile application to store the incoming data, as sketched below. Xamarin offers excellent support for SQLite database implementation, but there is no framework for resolving an optimal caching strategy.
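(A minimal sketch using the sqlite-net library commonly paired with Xamarin. The CachedItem shape, including the IsPinned and IsDirty columns used in the sketches that follow, is hypothetical.)

using System;
using SQLite;

// A hypothetical record cached from the web service.
public class CachedItem
{
    [PrimaryKey]
    public string Id { get; set; }
    public string Json { get; set; }          // raw payload from the service
    public DateTime ReceivedUtc { get; set; } // used for FIFO purging
    public bool IsPinned { get; set; }        // user chose to keep this item
    public bool IsDirty { get; set; }         // modified locally while offline
}

public class OfflineCache
{
    private readonly SQLiteConnection _db;

    public OfflineCache(string databasePath)
    {
        _db = new SQLiteConnection(databasePath);
        _db.CreateTable<CachedItem>();
    }

    // Insert new data from the web service, or update what is already cached.
    public void Upsert(CachedItem item) => _db.InsertOrReplace(item);
}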

When creating a local store there is also the need for custom logic to manage the data. The storage space on the mobile device is limited, so it is unlikely that all of the data supplied by the web service can be cached on the mobile device indefinitely. Some applications may work best with a FIFO (first-in, first-out) logic applied to the data caching process. This logic could be triggered either when the local data store reaches a predetermined total size on the mobile device's local drive, or when the remaining space on the device's local drive drops below a critical level. At that point old data would be purged to create space for new incoming data. In some cases it might make sense to give users the option to override that logic for specific data objects that may be critical to the user's purpose; a purge method along those lines is sketched below.
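(Continuing the sketch above, this is a method that could be added to the OfflineCache class; the row budget and the pinning behavior are illustrative.)

// Purge the oldest unpinned rows until the cache is back under budget.
public void PurgeOldest(int rowsToKeep)
{
    _db.Execute(
        "DELETE FROM CachedItem WHERE IsPinned = 0 AND Id NOT IN " +
        "(SELECT Id FROM CachedItem ORDER BY ReceivedUtc DESC LIMIT ?)",
        rowsToKeep);
}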

There must also be custom logic implemented to save and track changes the user makes to the data as they work offline. When the device finally reconnects to the network, there needs to be logic to manage the update process. Depending on the amount of data kept in the local store, there is a good chance that pushing the entire contents of the local store back to the web service for update would be impractical. Custom logic within the mobile application itself must track changes so that only new data is pushed to the web service for update, as in the sketch below. This strategy would save time and bandwidth, both of which may be critical to the user.
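(One more addition to the hypothetical OfflineCache: the IsDirty flag drives the reconnect push, and pushToServiceAsync stands in for whatever API call your backend exposes. This requires using System, System.Linq, and System.Threading.Tasks.)

// Push only locally modified rows when connectivity returns.
public async Task SyncAsync(Func<CachedItem, Task> pushToServiceAsync)
{
    var dirtyItems = _db.Table<CachedItem>()
        .Where(i => i.IsDirty == true)
        .ToList();

    foreach (var item in dirtyItems)
    {
        await pushToServiceAsync(item); // send just this change upstream
        item.IsDirty = false;
        _db.Update(item);               // mark it clean once accepted
    }
}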

There is no out-of-the-box Xamarin framework to support this caching. Each developer must roll their own version and think through all of the use cases.


Security Issues - What is the nature of the data being stored?

Depending on the purpose of the application (medical applications especially), it is possible that the data being cached in the local store is sensitive or private in nature. The presence of malware on a mobile device where your application is in use is a possibility that should be considered. It may be necessary to encrypt the data being cached by your application in order to prevent malware from mining data out of your application's local data store. Xamarin does support data encryption, but developers typically have to download third-party tools to build effectively.

Another issue to consider is whether your application requires secure user login. Typically user authentication is handled by backend web services, so how can a user log in to the application when disconnected? A custom offline login process must be created and implemented within the application. Validated login information must also be stored locally on the user's mobile device for use by the offline authentication process. Storing login information locally would require an encrypted local store in order to keep users' credentials secure.

Additionally, when working in a blackbox backend scenario there is typically no way to obtain validated login information by request from the web service. Seemingly the best (and possibly only) way for the mobile application to capture validated login information is to store usernames and passwords when a successfully validated login to the web service occurs. This solution is limited and would only store the login information of users that successfully log in on a specific device. Authorized users who have not previously logged in on that specific device would not be able to log in on that device while it is disconnected.


Battery Life - Is this a critical issue for your user?

When a device becomes disconnected from the network, it begins scanning for a new connection. This scanning can consume a lot of power and drain the device's battery very quickly. Accelerated battery consumption is potentially a big problem for end-users if they are in a situation where there is no access to electricity. Within your mobile application it may be wise to implement logic that alerts the user when the device becomes disconnected from the network and then offers the user options for the device's network scanning behavior. It may be possible to override the device's default network scanning behavior in order to give the user the ability to change the time interval between scan attempts, or to stop the device from scanning for a new connection altogether. All of these options would help to extend the battery life of the device. The user should also be given the option to re-enable the network scan when they are back within range of the network. While mobile operating systems do implement battery-saving techniques, Xamarin developers don't have existing tools built into the framework.


Conclusion

In this installment I have taken a look at the challenges presented in attempting to compensate for the limitations of a DIL environment when there is no support for this scenario from the web services your mobile application consumes. There are many aspects of the problem that must be considered, and the implementations of these solutions will largely depend upon the priorities and purpose of your application. When dealing with a virtual blackbox backend, a lot of work is put upon the application development team to find, create, and implement custom solutions within the mobile application itself.

Xamarin is a great tool for this scenario when the application being developed must be deployed on both Android and iOS. The majority of the custom logic required to implement the DIL solutions I described would be shared across both platform versions. Sharing that amount of code reduces development time and allows for faster and less costly deployment across multiple platforms. A DIL environment creates many challenges, but we here at Bytelion relish the opportunity to solve problems and create custom solutions for our clients.

Stay tuned for our next installment as we explore newly created tools and techniques to support DIL using Xamarin!

Why Do I Need A QA Engineer?

By Agile, Development, Innovators, Startup, Testing

Introduction

Why is a Quality Assurance engineer necessary for the development of software? Couldn't I simply get my developers to QA/review their own work? Could I get developers to review each other's work? These are all questions that I have come across at some point or another from multiple people.

Before I answer, let’s briefly summarize what QA is:

What is QA?

QA is the analysis of the functionality and overall appearance of your site or app. This can include (but is not limited to) cross-browser testing, screen resolution compatibility testing, grammar, spelling, functionality… the list goes on. QA is ideally approached from multiple angles.

When testing a simple 'contact us' form, the QA engineer would verify that the email field accepts only valid email addresses, that the name fields do not accept numbers or special characters, that fields have length limits so malicious users cannot overwhelm your system by entering large amounts of characters, etc.

QA Responsibilities

A QA engineer's responsibility is to review each feature before it is released, suggest fixes for issues, and approve code before it reaches the product owner. Therefore, not only is the entire site under the QA engineer's watchful eye, but each part of the site is also analysed during its creation.

Why is QA Necessary For Development?

As you can see above, the responsibilities for QA are laborious. A dedicated amount of time from someone who knows your system is needed. Not only is QA needed for each release, but regular testing across your site is also critical to catch issues that may affect it from external sources.
Example: Still running Flash Player on your site? Browsers are discontinuing support since it is considered deprecated technology. Your QA engineer will (/would) know this.


Can Developers QA Their Own Work?

The QA engineer should be a consistent team member, part of daily scrums and involved in feature development. Developers, however, are assigned a particular module of the whole system and aren't truly aware of the system as a whole. Not only is development typically modular, but a developer also has a completely different mindset and thought process; he/she may not consider all the scenarios a tester would consider.
They can definitely code review their peers' work, but QA is a different game entirely.


Want to find out more about software development practices? Check out our Blog!
Bytelion is a full service software development firm. Check out the rest of Bytelion.com or contact us to find out more.