Tuesday, December 15, 2015

Apps on the big screen part III: Debugging on an Android TV

How cool would it be if you could debug your TV app on a real device! For tvOS all it takes is a provisioning profile and a USB-C to USB-A cable to connect the Apple TV 4 to your MacBook. How to enable debugging on a real Android TV is less obvious. It starts with selecting the right Android TV device.

Android 5.0 set-top boxes are hardly available in my part of the world, and getting one from AliExpress is not an option because the delivery times are too long and I do not have that much patience. So I decided to get myself a TV running on Android.


1. Pick the right Android TV

First of all, if you plan to develop apps for Android TV and want to be able to debug them, it is important to decide which brand and model you pick. Previously I made the mistake of choosing a Philips TV, the 32PFK6500 to be exact. It was the only 32 inch model that was available, which made it the perfect development TV, or so I thought.

It turned out not to be such a good idea. There is no way to debug and test your apps on this or other recent Philips Android TVs, basically because of Philips' security policy. Yes, you can unlock the developer menu, but you will never be allowed to connect via ADB. This makes debugging impossible.

Too bad! It is a nice TV and I really enjoyed the Ambilight experience. This TV is probably great if you just want to watch TV and use some apps, but it is not really suitable for app development, although there seems to be a workaround to at least get your app tested through the Google Play alpha and beta distribution mechanism.

Tip of the day: if you want to return your smart TV to the shop, do not forget to restore it to the factory settings. You may have entered your Google account and other details that you want to erase first. Unlike me, do this before you put everything back into the box ;)

So I went back to the shop and exchanged the Philips TV for a Sony Bravia TV, the 43W80xC. With this television I made a new attempt, this time with success!



2. Unlock the developer menu

Just as with Android on a smartphone, you have to unlock the developer menu first. To do so, press the Home button on the remote control, go to Settings, choose About and scroll down until you see the Build option. Click on it seven times to unlock the developer menu.


Before you continue: yes, here comes the disclaimer. I guess there is a good reason for Philips to prevent app debugging and to disallow apps from unknown sources. Use this tutorial at your own risk. My TV did not explode or anything like that, but I am not too sure about yours ;)



3. Enable ADB debugging

The developer menu will appear under System Preferences. Choose this option and then choose Debugging. Here you can set ADB debugging to On.



4. Debugging over LAN

I have not found a way to do USB debugging yet. It just does not seem to work, although there are multiple USB (2.0) ports available on the device. So let's do this slightly differently. Out of the box, the Sony Android TV allows debugging over a LAN connection directly. Get its IP address and get connected!

Note: your MacBook (or PC) and the TV have to be on the same network to make the magic happen.

Click on the Home button on the remote control and under the Settings section choose Network settings. Next click on Wi-Fi or Wired LAN, depending on how your TV is connected. Under IP address you will see the TV's IP address.

Open a new terminal window and connect ADB to the IP address you have just found (using the default port 5555).

adb connect 192.168.2.18:5555

If everything went well the result will be something like connected to 192.168.2.18:5555. At this point the Philips TV earlier returned the message Connection refused.
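
To double-check that the connection is really there, you can list all devices known to ADB; the TV should show up with its IP address and port (the address below is just the example one from above):

adb devices

List of devices attached
192.168.2.18:5555	device

When you are done, adb disconnect 192.168.2.18:5555 drops the connection again.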


5. Create an Android TV project in Android Studio

In Android Studio create a new project and choose TV as the platform. This will create a ready-made media centre app for you, which you can modify if you want to.
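
What makes the app appear in the TV launcher, by the way, is the LEANBACK_LAUNCHER category on the main activity. The wizard generates an intent filter along these lines in the AndroidManifest.xml file (the activity name may differ in your project):

  <activity android:name=".MainActivity">
    <intent-filter>
      <action android:name="android.intent.action.MAIN" />
      <category android:name=
        "android.intent.category.LEANBACK_LAUNCHER" />
    </intent-filter>
  </activity>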

6. Launch your app

If the adb connect command succeeded in the previous step and you run your app in Android Studio, the TV will be shown under Connected devices. Select it and click on the OK button.



The first time, an Allow USB debugging dialog will pop up. Choose Always allow and click on the OK button to continue.

Another dialog that may appear is the one that says Allow Google to regularly check device activity for security problems.... So far I have chosen to decline this, but I guess it will not do too much harm if you choose Accept.




Conclusion

As you can see it is not that difficult to debug your Android TV app once you have the right equipment and know how to configure things. With everything up and running, the next challenge is to create a really cool TV app.


Further reading

Monday, December 7, 2015

App of the rings: Neyya, Android SDK and BLE

My great friend Wim Wepster gave me this interesting Neyya ring. Fortunately it did not come with a proposal. Instead it was a great opportunity to explore Bluetooth and alternative wearables, such as this ring.

Neyya is a ring that can send gestures such as taps and swipes to a mobile or another device that supports BLE. You can use it for presentations or games, although you could just use your mouse, watch or phone for that as well. So I wondered how it works and what an interesting use case for it could be.

Neyya comes with an iOS and Android app, but also with an Android SDK. I had some trouble with it, as it was not able to detect my Neyya ring at all. For the time being (as there must be a more elegant solution) I have fixed this by modifying the NeyyaBaseService class within the NeyyaAndroidSDK project.



I removed the check that determines whether the Bluetooth device is a Neyya ring from the onLeScan method.

 private BluetoothAdapter.LeScanCallback mLeScanCallback =
   new BluetoothAdapter.LeScanCallback() {
     @Override
     public void onLeScan(final BluetoothDevice device, int rssi,
       byte[] scanRecord) {
         String deviceAddress = 
           device.getAddress().substring(0, 13);

         // if (neyyaMacSeries.equals(deviceAddress)) {
              NeyyaDevice neyyaDevice = new NeyyaDevice(
               device.getName(), device.getAddress());
              if (!mNeyyaDevices.contains(neyyaDevice)) {
                   logd("Device found - " + device.getAddress()+ 
                    " Name - " + device.getName());
                   mNeyyaDevices.add(neyyaDevice);
                   broadcastDevices();
              }
         //}
     }
  };
I did the same thing for the isNeyyaDevice method, like this:
public boolean isNeyyaDevice(NeyyaDevice device) {
  String deviceAddress = 
    device.getAddress().substring(0, 13);

  /* if (!neyyaMacSeries.equals(deviceAddress)) {
       logd("Not a neyya device");
       broadcastError(ERROR_NOT_NEYYA);
       mCurrentStatus = STATE_DISCONNECTED;
       broadcastState();
       return false;
     }
   */
    return true;
  }

Okay, I have to be more careful about which Bluetooth device I pick from the list of available devices, but at least I am able to continue my journey. Let's connect to the device.


Great! But what is it that we are trying to solve here?

The supported gestures are taps, double and triple taps, and swipes left, right, up and down. Now, what problem could this ring solve other than the things that already come with the Neyya app, such as capturing a picture, turning the volume of a device up and down, and moving to the next or previous song?

One of the problems the Neyya ring could help me with is something most people will recognize: your best ideas come up when you are not able to write them down immediately, for example while taking a shower or driving a car. In such cases it would be great to just rub your ring to create an audio note. That is easy to implement.

To create a prototype I have modified the ConnectActivity class a little. First, in the onReceive method of the BroadcastReceiver implementation, I call a new method, actOnGesture:

 ...
 else if (MyService.BROADCAST_GESTURE.equals(action)) {
   int gesture = intent.getIntExtra(MyService.DATA_GESTURE, 0);
   showData(Gesture.parseGesture(gesture));

   actOnGesture(gesture);
 ...

That method looks like this:

    private String actOnGesture(int gesture){
        // Start recording on a double tap; stop on a swipe up.
        switch (gesture) {
            case Gesture.SWIPE_UP:
                stopRecorder();
                return "SWIPE_UP";
            case Gesture.DOUBLE_TAP:
                startRecorder();
                return "DOUBLE_TAP";
        }
        return "";
    }

To start recording we create a MediaRecorder instance and create an output file for it in the M4A format. In the stopRecorder method we stop the recording and release the recorder.

   // Requires android.media.MediaRecorder, android.os.Environment
   // and java.io.IOException imports.
   private static String mFileName = null;
   private MediaRecorder mRecorder = null;

   private void startRecorder() {
        if (mRecorder != null){
            return;
        }
        // Use the current time to create a unique file name
        // on the external storage.
        String timeStamp = "/" + System.currentTimeMillis() +
          ".m4a";
        mFileName = Environment.getExternalStorageDirectory().
          getAbsolutePath();
        mFileName += timeStamp;
        mRecorder = new MediaRecorder();
        mRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        mRecorder.setOutputFormat(
         MediaRecorder.OutputFormat.MPEG_4);
        mRecorder.setOutputFile(mFileName);
        mRecorder.setAudioEncoder(
         MediaRecorder.AudioEncoder.AAC);
        mRecorder.setAudioSamplingRate(16000);
        mRecorder.setAudioChannels(1);
        // The bit rate is in bits per second; 16 kbps is
        // good enough for voice memos.
        mRecorder.setAudioEncodingBitRate(16000);

        try {
          mRecorder.prepare();
        }
        catch (IOException e) {
          // Without a prepared recorder there is nothing
          // to start, so clean up and bail out.
          mRecorder.release();
          mRecorder = null;
          return;
        }
        mRecorder.start();
    }

    private void stopRecorder() {
        if (mRecorder==null){
            return;
        }  
        mRecorder.stop();
        mRecorder.release();
        mRecorder = null;
    }

Do not forget to add the right permissions to the AndroidManifest.xml file before you test the app.

  <uses-permission 
    android:name="android.permission.BLUETOOTH" />
  <uses-permission 
    android:name="android.permission.BLUETOOTH_ADMIN" />
  <uses-permission 
    android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
  <uses-permission 
    android:name="android.permission.RECORD_AUDIO"/>

As soon as the app is running and the ring is connected, double tap on it to start recording and swipe up to stop. You can use a file manager app, such as Astro, to locate and play the audio file that you have recorded.


Conclusion

Later I will release the code of this POC on GitHub; I need to work out the concept a little more first. Anyway, I am not fully convinced yet whether the Neyya ring is a useful wearable device or not, but by examining the code I have learned something about Android and Bluetooth, and I now have a memo recorder that I can use just about anywhere.

Do not forget to take your phone with you as well, wherever you go. Wait for my first shower-voice-memos to arrive ;) Of course you can use the ring plus app as a spy tool if you want or find other purposes for it. My precious...

Further reading

Monday, November 23, 2015

Android Studio 2: Much faster and enhanced testing support

Android Studio 2.0 comes with some great new features. Building and deploying apps will become much faster. The new Instant Run feature, for example, allows you to quickly see the changes you have made.

The new emulator will run much faster and comes with enhanced testing support. It will support Google Play Services, and phone calls, low battery conditions and GPS locations can be simulated. It will also support dragging and dropping APK files, just like Genymotion does.

More speed, that is what Android developers need. And with Android Studio 2.0, which is currently available as a preview in the canary channel, speed is what we get.



Enable Instant Run

With Instant Run you build and deploy an app to an emulated or physical device just once; after that, whenever code needs to be changed, it only takes a few seconds before you can see the changes in the running app.

To see the new stuff for yourself you can grab an Android Studio 2.0 copy from the canary channel and enable the Instant Run feature for your existing apps.

From the Android Studio menu choose Preferences (on OS X). In the Preferences dialog expand the Build, Execution, Deployment option and choose Instant Run. You probably need to click on the Update project link to enable this new feature. In my case I also had to update the build tools and resync the project.


Once you have done that you are good to go. Run and deploy the app using the Run button.

While your app is running you can modify your code, for example by changing the text of a toast being displayed in your app. As a small demo I have modified one of the recipes from my book, but you can try this with any app of course.

Now just hit the Run button again. A toast will be displayed to notify you about the changes. Indeed, we no longer need to restart the activity to see our changes.



Note! Instant Run is a great feature, but it does not (yet) support all kinds of changes. Some of these limitations are known, such as changes to annotations, static fields or methods. Other kinds of changes, such as modifications to the layout, should be supported I guess, but I was not able to make that work.

It might be because the project that I am using for testing has multiple flavors. Or it could be because this is just a preview of Android Studio 2.0 and I need to be a little more patient and wait for a more stable release.

Conclusion

Android Studio 2.0 is focused on speed and better testing support. I think that is exactly what Android app developers deserve after struggling so many times with speed (in particular with Eclipse in the old, less good days) and with the many fragmentation challenges we still have today.

Just like the Android OS itself, Android Studio has become mature, and that is great news!

Further reading

Sunday, November 15, 2015

Apps on the big screen part II; tvOS app development for real

Earlier I wrote about apps on the big screen: apps for Apple TV and for Android TV. I have been lucky enough to receive one of the Apple TV developer kits, so I have had some time to play with it. Is it the larger screen, or is it the many new options that could become available with it, that causes all the fun? Anyway, I am having a great time creating apps for the new platform.

I had to buy a USB-C to USB-A cable to connect the Apple TV to my MacBook, because Apple informed me that using iTunes on a Mac is the only way to update the Apple TV. That cable did not come with my developer kit, so it took some time to get one, as a USB-C cable was not yet part of my otherwise large cable collection.

Hello TV!

So I connected the two devices, only to find out that iTunes was not able to connect to the server. So, no update yet. But it was fun to find out that I was able to deploy directly from Xcode to the TV.

It is not really different from running an app on your iPhone, but I was surprised because I thought I had to use the TestFlight app for that, and so far I had seen my early creations on the simulator only.



Get up to speed

There is no need to dive into TVML right away. You can use existing Objective-C or Swift code or code snippets and get up to speed with tvOS really quickly. The downside is that not all frameworks that exist for iOS are available for tvOS (yet). For some of them that makes sense, as Apple TV development comes with some known limitations.

One of them is the lack of any option for local persistence. All application data needs to be stored in the (i)cloud. What makes less sense is that, for example, the Event Kit framework is missing for tvOS. I am sure there will be a workaround for this, but these kinds of things do not make the life of a tvOS developer very easy.

Parse & tvOS

Using SDKs from third parties in your tvOS project can be an issue as well. For my app I want to store data in the cloud using Parse. I cannot use iCloud, because I want to share the app with both Apple TV and Android TV users.

Unfortunately Parse cannot be used, basically because the SDK supports caching and other local persistence features. It is not going to work, so you either have to wait until the guys at Parse release a tvOS-specific version of their SDK, or you can try to use the Parse REST API and do some things yourself.

Conclusion

Since I am a huge fan of Parse, I really miss their SDK for tvOS. On GitHub you can find a Parse REST API approach for tvOS. It is written in Swift and, although minimalistic, it looks like an interesting starting point to make my TV app work with Parse anyway.

I still have not totally grokked tvOS and how to work around its existing limitations, but I am doing some great research on the topic. In the meantime you can check out some interesting resources on the topic. There are some nice examples available here. Or check this site or this one.

Or watch this video or this one.

Further reading

Sunday, November 1, 2015

The Android Studio Cookbook has arrived.

A few days ago my first book, about Android Studio and Android application development, became available at Packt Publishing and Amazon. I had fun writing it and I wonder whether you will think it is fun to read.

Android Studio is the number one IDE for developing Android apps, and it is available for free to anyone who wants to develop professional Android apps. Any type of Android app can be developed using Android Studio.

Think of apps for phones, phablets, tablets, TVs, cars, and for glasses and other wearables such as watches. Or consider an app that uses a cloud-based backend like Parse or App Engine, a watch face app or even a complete media centre solution for TV.

So, what is in the book?

This book will help you to make the right choices while developing your apps. For example, on smaller screens provide smart navigation, and use fragments to make your app look great on a tablet too.

Or see how content providers can help you to manage and to persist data and how to share data amongst applications. The observer pattern that comes with them will save you a lot of time.



  • The book will also elaborate on material design. Create cool apps using the CardView and RecyclerView widgets, for example, or find out how to create special effects and great transitions.

  • Another chapter is dedicated to investigating the Camera2 API and how to capture and preview photos. In addition you will learn how to apply filters and how to share the results on Facebook.

  • You will learn about patterns and how support annotations can help you to improve the quality of your code. Testing your app is just as important as developing one, and it will take your app to the next level. Aim for a five star rating in the Google Play Store later.

  • The book shows you how to do unit testing, based on JUnit or Robolectric, and how to use code analysis tools such as Android Lint.

  • You will learn about memory optimization using the Android Device Monitor. Detect issues and learn how to fix them.

  • Having a physical Android device to test your apps on is strongly recommended, but with thousands of different Android devices available, testing on real devices could be pretty expensive. Genymotion is a really fast and easy-to-use emulator that comes with many real world device configurations.

  • Did all your unit tests succeed? No more OutOfMemoryExceptions? No memory leaks found? Then it is about time to distribute your app to your beta testers. The final chapter explains how to configure your app for a beta release by creating the build types and build flavours that you need.

  • Finally, distribute your app to your beta testers using Google Play and learn from their feedback.

    Conclusion

    These and other topics can be found in this cookbook. Since I am the author of the book there will be no conclusion here. That would be a bit weird, wouldn't it? ;-)

    Instead I would like to ask what you think of it...

    Further reading

    Apps on the big screen; Apple TV 4 & Android TV

    We have apps on our phones, phablets, tablets and on our watches. And recently apps have arrived on TV as well. This year Android TV became available and of course Apple has released its Apple TV 4 and, with that, tvOS. This creates interesting new opportunities for developers (and publishers).

    Both Apple and Google are aiming at streaming movies and TV series and at casual gaming, but you can also think of other types of apps. So-called second screen apps, for example, could be integrated into a television show directly.

    On the other hand, apps for TV cannot be compared with apps on your phone, as the interaction is completely different on television. TV apps are much more about consuming content and information and are less focused on interaction. Given this fact it is difficult to predict exactly what TV apps are going to bring us in the future, other than movies, music and TV series.


    Apple TV

    The fourth generation of Apple TV finally comes with an app store and the possibility for third parties to develop apps for it. For now, Siri on TV seems to be the most impressive part of the new Apple TV, but after a while, when more apps become available, I expect these new apps to be the most interesting part of it.

    The new Apple TV comes with a nice remote control. It has a touch surface and allows the user to perform gestures such as swipes, taps and clicks. A gyroscope and a motion sensor are great additions, supporting game play in particular. In your code you can use gesture recognizers to detect swipes and taps, and there are a couple of ways to detect when the various buttons on the control are pressed.

    The remote communicates using Bluetooth and it has a microphone on board, so you can actually ask Siri about your favorite movie or TV series. Unfortunately Siri is not (yet) available in most countries.



    tvOS

    In order to develop for tvOS you need Xcode 7.1. It comes with support for tvOS and a tvOS simulator. For your TV app there are basically two approaches you can think of.

    You can create a TV app using Swift or Objective-C. Of course you can also port your existing iOS app or game. UIKit, Core Graphics and SpriteKit, to name just a few, are all supported. Check out this list for more information. If you are already an iOS developer, you can create a tvOS app in no time.



    TVML

    Another approach is TVML: a new way of making apps with TVML, TVJS and TVMLKit.

    TVML stands for Television Markup Language and is a form of XML. TVJS is a set of JavaScript APIs which provide you with the means to display apps created with TVML. Finally there is TVMLKit, which is the glue between TVML, JavaScript and a native tvOS application.

    There are many, probably familiar looking templates to be found here.
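
    To give you an idea of what TVML looks like, here is a minimal document based on one of those templates: an alertTemplate with a title and a single button. Your TVJS code would load and present a document like this one.

    <document>
      <alertTemplate>
        <title>Hello TV!</title>
        <button>
          <text>OK</text>
        </button>
      </alertTemplate>
    </document>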


    Use Xcode 7.1 to create a new tvOS app...

    TestFlight has been updated to support tvOS app deployment. The Apple TV comes with the TestFlight app pre-installed.



    Before you start it is important to know that tvOS has some limitations, although they are not really as bad as they might look at first glance. The app size cannot exceed 200MB and local persistence is not possible at all. To persist data you need to use iCloud, CloudKit, your own backend service or an mBaaS, like Parse or Firebase.

    If you want to share events or data with other users, CloudKit might be a good option. If you want to create a cross-platform app you should consider using an mBaaS or creating your own backend.


    Android TV

    Great, but what about Android TV apps? Android TV runs on Android 5.0 (Lollipop) and above and offers more or less the same user experience as the Apple TV does. Media streaming apps and games are available in the Google Play Store as well.

    Android TV can be built into TVs or into stand-alone media players. Hardware manufacturers such as Sony, Sharp and Philips have released, or will release, TVs supporting Android TV in 2015. Their Android TVs come with features such as voice search and of course the Google Play Store. Android TV also supports Chromecast, allowing a phone to select and control media playback on the TV.

    Creating an Android TV app is as easy as starting Android Studio, creating a new project and selecting TV as the project type. If you are an Android developer it is not very different from a common smartphone app. The main difference, as is the case for any app on TV, is the interaction. For example, the TV is not a touch screen. Besides, it will probably be too far away to touch anyway.


    I will elaborate on Android TV app development later. For now I am particularly interested in what the differences between Apple TV and Android TV are.


    Conclusion

    I think apps on TV have arrived way too late, as a lot of people already no longer just watch what is being broadcast at a particular moment in time. Instead they watch the things they want to see at the time that suits them best.

    Apps on TV can contribute to this new way of watching TV, and they could help to integrate second screen apps (voting apps and so on).

    When you compare Android TV with Apple TV you see there is not really much difference in user experience. Since Apple TV is produced by Apple only, I expect Apple TV to run more smoothly, but I must admit I have not seen Android TV in real life yet, so that is just an assumption based on experiences with Android and iOS on smartphones. And yeah, both iTunes and Google Play offer movies, music, games and series. So, what actually is the difference?

    Apple TV comes as a set-top box solution only, and despite some rumours an Apple TV television is not going to appear soon. Android TV will be integrated in both set-top boxes and TVs. I expect this to result in more people using Android TV.

    Altogether interesting stuff to examine. Let's think of some great apps for TV... In the end, the app ecosystem determines which platform will succeed.

    Further reading

    Sunday, October 25, 2015

    Your app can open doors: Host Card Emulation

    Your app can open doors or make payments or do other cool stuff. Host Card Emulation (HCE) is a technology that emulates a payment or access card on a mobile device using an app. Many Android devices that offer Near Field Communication (NFC) functionality already support NFC card emulation.

    A card can be emulated by a separate chip in the device, called a Secure Element (SE). Android 4.4 introduces an additional method of card emulation that does not necessarily involve a Secure Element, called host-based card emulation. This allows any Android application to emulate a card and talk directly to the NFC reader.

    Using Host Card Emulation you can, for example, pay, travel or check into a hotel, but you need a phone running Android 4.4 or above.

    Any app on an Android 4.4 (or above) device can emulate an NFC smart card, letting users tap to initiate transactions with an app of their choice. Apps can also act as readers for HCE cards and other NFC-based transactions.
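
    To give you an idea of what this looks like in code: an HCE app registers a service that extends HostApduService and declares the card's application ID (AID) in its manifest, using the android.nfc.cardemulation.action.HOST_APDU_SERVICE action. A minimal sketch (the class name and the response are just examples, not a real payment implementation):

    import android.nfc.cardemulation.HostApduService;
    import android.os.Bundle;

    public class MyHostApduService extends HostApduService {

        // 0x9000 is the ISO 7816-4 status word for "OK".
        private static final byte[] STATUS_OK =
          new byte[]{(byte) 0x90, (byte) 0x00};

        @Override
        public byte[] processCommandApdu(byte[] commandApdu,
          Bundle extras) {
            // The NFC reader sends its APDU commands here,
            // starting with a SELECT AID command. Parse the
            // command and answer it; this sketch simply
            // reports success.
            return STATUS_OK;
        }

        @Override
        public void onDeactivated(int reason) {
            // The reader moved out of range or selected
            // another service.
        }
    }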



    Playing with NFC tags is fun, but HCE is even more fun...

    Get started

    To get started with HCE on Android you can download the card emulator and the card reader examples from GitHub (or you can use the New, Import Sample option from the File menu within Android Studio, on OS X, for that). I have installed the emulator app on a Samsung Note 3. On a second device, a Nexus 5 to be precise, I have installed the card reader app.

    After enabling NFC on both devices I am able to read the 'loyalty card' that exists on the Note 3 with the reader app on the Nexus 5, using NFC. That works great; however, I have not been able to do the same with a Samsung Galaxy S4 Mini (emulator app) and the Nexus 5 (reader app). For some reason that I have not figured out yet, the examples do not work with the combination of these (and perhaps a few other) devices.

    For a Windows 10 example you might want to check this link.

    Secure element

    A Secure Element (SE) securely stores card data and does some cryptographic processing.

    During a payment transaction it emulates a card to authorize the transaction. The Secure Element can be embedded either in the phone or in the SIM card.


    Pay


    Google wallet & Android Pay

    Android Pay and Google Wallet do not (yet) use a device-based Secure Element; they use Host Card Emulation (HCE) instead. The card emulation and the Secure Element are separated into different areas. HCE can also be implemented with a Secure Element chip, but this is not yet required.

    Android Pay works for KitKat users and higher (Android 4.4+).

    Samsung Pay

    Samsung Pay works only on a small number of Samsung devices. It is built into Samsung's Galaxy S6, S6 Edge, Note 5 and S6 Edge+. Samsung Pay uses (embedded) Secure Elements (and it supports HCE).

    Since it uses both NFC and magnetic secure transmission (MST) to transmit payment information, the service will probably work at more point-of-sale (POS) terminals than Android Pay does.

    Apple Pay

    Apple Pay uses a device-based Secure Element and does not use HCE. It uses the Secure Element chip on the iPhone as part of the token generation process and also to store fingerprint data.

    I have not seen Apple Pay in action yet. I guess the nearest opportunity will be in London, where you can use Apple Pay to travel by bus, tram, railway or underground.

    Apple Pay works on iOS 8 and above. The functionality is available on the iPhone 6 and iPhone 6+ only.

    Windows 10 & NFC HCE

    Microsoft supports HCE NFC payments in Windows 10 (mobile). You can check out the Windows 10 HCE Tap to Pay demo on YouTube.


    Conclusion

    HCE is an interesting technology, available for Android and Windows. Apple is doing things differently and, as it seems, in a better and more secure way.

    Since both Android and iOS have their own audience, Android Pay and Apple Pay could perfectly well exist next to each other. And then there is Samsung Pay, which will probably become a huge competitor for Android Pay.

    And what about us, developers? Well, we have some new challenges and things to figure out...

    Further reading

    Sunday, October 18, 2015

    ADB Plugin for Android Studio

    Android Studio comes with many interesting plugins. Under the Plugins section of the Preferences dialog in Android Studio (on OS X) you can find some of them, and you can browse for additional ones using the Browse repositories button.

    On the Android Studio Plugin Repository page (or on GitHub) you will find all available plugins. Recently I found a nice plugin for Android Studio (and IntelliJ IDEA) that could help you to speed up your daily Android development.

    ADB Not Responding - Wait More or Kill adb or Restart

    Does the line above look familiar? To restart ADB I always used the command prompt for that, as shown below. Using this plugin, created by Philippe Breault, we can perform this and other tasks from within Android Studio.
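
    For the record, this is what restarting ADB from the command prompt looks like; the plugin performs the same kind of tasks from within the IDE:

    adb kill-server
    adb start-server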

    In addition to restarting ADB, the plugin can also clear an app's data, uninstall an app or just kill it. After installation, ADB Idea appears under the Tools, Android menu, but as Philippe suggests on GitHub you can use the Ctrl + Shift + A (for Windows: Ctrl + Alt + Shift + A) shortcut.


    Short cut for all

    By the way, you can also use the Cmd + Shift + A shortcut and type adb to look up the adb commands, or use this shortcut to look up any other command.


    Conclusion

    Well, what else can I say about it? It is a nice little tool. Thank you, Philippe Breault!

    Further reading...

    Tuesday, October 13, 2015

    Android & Retrofit 2.0

    Retrofit is a well known and very popular networking library. It is an open source library created by Square and it is widely used by Android developers because it performs great and is easy to implement.

    Retrofit 2.0 introduces new features, such as cancelling a request, and uses a pattern that fits both synchronous and asynchronous requests. Also, it no longer depends on just Gson.

    You now need to add a converter as a separate dependency, which gives you more flexibility when you want to convert a response into an object. You can use the Gson converter, but you can also choose another converter to support a different data format, such as XML. Or you can create your own converter if you want to.



    Retrofit dependencies

    Here is a small demo project that consumes data from Wipmania using Retrofit 2.0. Wipmania is a service that determines your location based on your IP address.

    1. Create a new project in Android Studio and add the dependencies for Retrofit to the build.gradle file in the app folder. The dependencies section will look like the snippet shown below.

    compile 'com.squareup.retrofit:retrofit:2.0.0-beta1'  
    compile 'com.squareup.okhttp:okhttp:2.4.0'  
    compile 'com.squareup.retrofit:converter-gson:2.0.0-beta1'
    

    2. Define the API endpoint in an interface called IRepository

    import retrofit.Call;
    import retrofit.http.GET;
    
    public interface IRepository {
    
        // The generic type parameter tells Retrofit (and the
        // Gson converter) what to deserialize the response into.
        @GET("json")
        Call<Result> demoCall();
    }
    

    3. Create a new class, name it Result, and add some of the fields returned by the Wipmania service.

    public class Result {
    
            public String latitude;
            public String longitude;
            public Address address;
    
            // Static, so that Gson can instantiate it without
            // an enclosing Result instance.
            public static class Address {
                public String continent;
                public String country;
            }
    }
    

    4. Create a method testRetroFit in your MainActivity. Within this method, set up Retrofit and call the method from the interface we created. The API returns data in JSON format; the GsonConverterFactory will convert that data into a Result object.

    If the call succeeds we display some of the properties of the returned data in a toast; if it fails we display an error.

    import retrofit.Call;
    import retrofit.Callback;
    import retrofit.GsonConverterFactory;
    import retrofit.Response;
    import retrofit.Retrofit;
    
    ...
    
    public static final String BASE_URL = "http://api.wipmania.com/";
    
    private void testRetroFit(){
    
       Retrofit retrofit = new Retrofit.Builder().
            baseUrl(BASE_URL).
            addConverterFactory(GsonConverterFactory.create()).
            build();
    
       IRepository repository = retrofit.create(IRepository.class);
    
       Call<Result> call = repository.demoCall();
       call.enqueue(new Callback<Result>() {
           @Override
           public void onResponse(Response<Result> response,
             Retrofit retrofit) {
              displaySuccess(response.body());
           }
    
           @Override
           public void onFailure(Throwable t) {
              displayError();
           }
       });
    }
    
    private void displaySuccess(Result result){
        Toast.makeText(this, "lat=" + result.latitude +
          " lng=" + result.longitude +
          " continent=" + result.address.continent +
          " country=" + result.address.country,
          Toast.LENGTH_LONG).show();
    }
    
    private void displayError(){
        Toast.makeText(this, "something went wrong",
            Toast.LENGTH_LONG).show();
    }
    
    5. Call the testRetroFit method from somewhere in your app, for example from the onStart method. Do not forget to add the internet permission to your AndroidManifest.xml file, as shown below. Finally, run your app.
    <uses-permission android:name="android.permission.INTERNET"/>

    Cancel a transaction

    6. What if the user initiates a request but changes his mind? Previously Retrofit did not have a straightforward way to cancel an ongoing request, but that is no longer an issue. Just call the cancel method on the Call object.
    call.cancel();
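
    The new Call pattern also covers the synchronous case. Instead of enqueue you call execute, which blocks until the response arrives, so do this on a background thread and not on the main thread. A minimal sketch, reusing the repository and helper methods from step 4:

    // Synchronous variant; requires a java.io.IOException
    // import and must run off the main thread.
    Call<Result> syncCall = repository.demoCall();
    try {
        Response<Result> response = syncCall.execute();
        displaySuccess(response.body());
    } catch (IOException e) {
        displayError();
    }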
    

    Conclusion

    So far, Retrofit 2.0 seems to be a really great improvement, and I have only seen the beta version. There are also some other interesting features to investigate...

    Further reading...

    Thursday, October 8, 2015

    Mobile Behaviour Driven Development: Cucumber and Calabash

    Before you release your app to the Play Store or App Store it needs to be tested. And while we can do that manually, it would be better to do it in an automated way.

    Android Studio supports unit testing and so does Xcode. However, we also have to run UI tests. Google has released the Espresso framework for this purpose, and there is Xcode 7, which has introduced UI testing features as well.

    But what is a smart thing to do if we want to test the same functionality for both Android and iOS? Cucumber could be the answer to that question.



    Cucumber & Calabash

    Cucumber is a tool that runs automated acceptance tests written in a Behaviour-Driven Development style. It describes how software should behave in plain text.

    Features are described in a feature file, like the example shown here.

     Scenario: Login
      Given I am on the Login Screen
      Then I touch the "Email" input field
      Then I use the keyboard and type "hello@world.nl"
      Then I touch the "Password" input field
      Then I use the keyboard and type "verysecretpassword"
      Then I touch "LOG IN"
      Then I should see "Hello world"
    

    This is not just documentation. It can be used for automated UI tests as well.

    Features are described in a domain-specific language. They describe the behaviour of software without many implementation details, so these tests can also be written by the non-developing members of your team.

    Calabash & some glue

    Calabash is a framework that you can use for running Cucumber tests on Android and iOS apps. The framework has been created by the guys at Xamarin, which makes sense I guess, since Xamarin is a cross-platform mobile solution.

    There is some glue code required to actually run Cucumber tests using feature files. This glue needs to be defined in step definitions, and Cucumber typically lets you write these step definitions in the Ruby language. The glue is platform specific, but the feature file is not, so in theory it can be used for both the Android and the iOS version of your app.
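
    To give you an idea of what such glue looks like: a step definition maps the plain-text step to Calabash calls in Ruby. The step text and the query below are made up for this example; Calabash also ships with a set of predefined steps.

    Then(/^I touch "([^"]*)"$/) do |label|
      # Find a view marked with the given label and touch it.
      touch("* marked:'#{label}'")
    end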

    Another cool thing about Calabash is that it allows you to run automated tests in the cloud, for example by using the services of TestDroid or the Xamarin Test Cloud.

    Conclusion

    Seeing Cucumber tests in action is great, but it may take a while before you have set up everything you need for it.

    Behaviour Driven Development sure is an interesting approach. Using Cucumber and Calabash we can create mobile UI and acceptance tests for both Android and iOS.

    The downside is that Calabash in particular does not seem to be very mature yet. The tests run slowly and it takes a considerable amount of time to define them. Also, the mixture of Gherkin feature files, Ruby glue and Java/Objective-C/Swift code feels weird.

    On the other hand, the Calabash framework is still in beta, so it is probably just a matter of time before mobile behaviour-driven development becomes widely adopted.

    Further reading...