Week 1: Personal AI Assistant

This article is part of a series: File, New: Personal AI Assistant

One week down and thankfully, the project is on track! This week’s sprint objective was to ‘Build a simple console-based chat bot, powered by a mildly-trained AI engine’, and so far, so good:

[Screenshot: the console-based chat bot in action]

The biggest challenge I faced this week was learning how to train my Wit.ai bot engine. Unfortunately, most of the tutorials and demonstrations across the web are outdated, and the current Wit tools and concepts have changed significantly. There’s some good documentation on the official website, but it’s mostly geared towards utilising the API, and it isn’t much help in explaining to a complete beginner how to train a bot.

It therefore took a lot of trial and error, which fortunately was made easier by studying the source code of this library. Since I’ll want to customise how the library works for my purposes, I embedded a copy of the .csproj in my project solution.

Training the AI engine

I needed a simple task that I could get my application to perform, so I decided that the first thing I would teach it to do was recognise a request for the current time. I began here, inputting a variety of sentences that contained the word ‘time’, and applying the ‘time’ keyword to my custom time entity:

[Screenshot: tagging the ‘time’ keyword against my custom time entity in the Wit.ai console]

After inputting as many variations of the same question as possible (“tell me the time”, “what time is it?” etc.), I moved on to the Stories tab. Here you can create your ‘stories’, which are example interactions between a user and the bot. I created my ‘time’ story like so:

[Screenshot: the ‘time’ story in the Wit.ai Stories tab]

To explain this in a little more detail:

  • The bot recognises the ‘time’ keyword in the user’s request
  • The bot then returns with an action to perform, in this case “time” (which is predefined in my code)
  • My code returns a string value with the system’s current DateTime as a context object defined with a “currentTime” key
  • The bot returns a constructed sentence with the current time value included.
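To make that round trip a little more concrete, here’s roughly what the exchange with Wit’s converse endpoint looks like on the wire in the 2017-era API: first an ‘action’ response telling my code what to run, then – after my code posts the updated context – a ‘msg’ response containing the constructed sentence. The field names and values below are my best reading of the docs, so treat this as an illustrative sketch rather than a faithful transcript:

```json
{
  "type": "action",
  "action": "time",
  "entities": { "time": [ { "value": "time", "confidence": 0.93 } ] },
  "confidence": 0.93
}

{
  "type": "msg",
  "msg": "The time is currently 22:48",
  "confidence": 0.97
}
```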

Artificial Dumbness

The above works great with my code, which I’ll dive into below. The problem I’m facing now is that the time keyword is the only thing I’ve taught my bot to understand, so it doesn’t understand anything else and resorts to a wild guess – and the guess is almost always that I’m making another request for the time, with a ‘confidence’ factor of almost zero. My next task is to teach the engine some other features, so that my AI can differentiate between a request for the time and, for example, a request for the weather forecast. These are some of the things I’ll aim to teach it:

  • Weather forecasts
  • News headlines
  • Travel updates
  • Latest tweets / composing and sending tweets
  • Telling the time in different timezones
  • Movie listings
  • Setting a timer / alarm

I have more ideas in mind, but each feature brings with it the task of building the functionality for accessing various APIs and data sources, so the work grows with every feature I add. I’m hoping that this handful of features will be enough to smarten up my AI without putting too much stress on my timescale.
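In the meantime, one stopgap I may try for the wild-guessing problem is to simply refuse to act when the engine isn’t confident. This is purely a sketch – I haven’t verified that the library exposes a Confidence property on ConverseResponse, so the names here are assumptions:

```csharp
// Hypothetical guard: bail out when the engine is essentially guessing.
// 'response.Confidence' is an assumed property name, not verified against the library.
WitContext TellTheTime(ConverseRequest request, ConverseResponse response)
{
    if (response.Confidence < 0.5)
    {
        // Let the story respond with a fallback message instead of the time
        request.Context.Add("error", "Sorry, I didn't catch that.");
        return request.Context;
    }

    request.Context.Add("currentTime", DateTime.Now.ToShortTimeString());
    return request.Context;
}
```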

The source code

As promised, I’m keeping all of my work open source and freely available to anyone who wants to make use of it, under the MIT license. The repo is here.

To create my console-based concept, I created a PCL project which holds my App class and all of the services I’ll be making use of. In that project I have referenced the Wit.NET project mentioned above, allowing me to create a WitService class to manage the conversations. Here’s my WitService:

public class WitService : IWitService
{
    Wit _witClient;
    WitActions _actions;

    public WitService()
    {
        _actions = new WitActions();
        _actions["send"] = Send;        // required default action
        _actions["time"] = TellTheTime;

        _witClient = new Wit("", _actions)
        {
            WIT_API_VERSION = "20170307"
        };
    }

    public void BeginInteraction()
    {
        // Hands over to the Wit client's interactive console loop
        _witClient.Interactive();
    }

    WitContext Send(ConverseRequest request, ConverseResponse response)
    {
        // The default 'send' action; the library invokes this to deliver the bot's replies
        return request.Context;
    }

    WitContext TellTheTime(ConverseRequest request, ConverseResponse response)
    {
        // Add the current time to the conversation context under the "currentTime" key
        request.Context.Add("currentTime", DateTime.Now.ToShortTimeString());

        return request.Context;
    }
}
As you can see, there are two actions set up so far: the send action, which the library requires by default, and a TellTheTime action, which adds the current time string to the conversation context for the AI.

The BeginInteraction() function calls the Interactive() method of the Wit client, which begins an interactive session with the AI bot.
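For completeness, here’s roughly how the console app’s entry point wires this together – a sketch, since my actual Program class may differ slightly:

```csharp
class Program
{
    static void Main(string[] args)
    {
        // Construct the service, which registers the Wit actions
        IWitService wit = new WitService();

        // Hand control to the Wit client's interactive console loop
        wit.BeginInteraction();
    }
}
```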

On to Week 2

The objective I’ve set for this week is ‘Train the engine to understand commands that I want my assistant to act upon’, and with the above list of features I hope to train into my AI, I’ve got my work cut out for me. The main challenge will be training for interactions that require multiple inputs from the user, for example:

user > “Compose a new tweet”

AI > What would you like your tweet to say?

user > “This project is incredibly challenging!”

AI > Awesome. Tweet sent!
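I haven’t built any of this yet, but I imagine a multi-step story like the one above will come down to a pair of actions that pass state through the context – something like this sketch, where every name is a placeholder:

```csharp
// Placeholder actions for the 'compose a tweet' story – nothing here exists yet.
WitContext ComposeTweet(ConverseRequest request, ConverseResponse response)
{
    // Flag that the bot is now waiting for the tweet body;
    // the story branches on this context key to ask the follow-up question
    request.Context.Add("awaitingTweetText", "true");
    return request.Context;
}

WitContext SendTweet(ConverseRequest request, ConverseResponse response)
{
    // The user's next message becomes the tweet body;
    // posting via the Twitter API would happen here
    request.Context.Remove("awaitingTweetText");
    request.Context.Add("tweetSent", "true");
    return request.Context;
}
```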


If you want to know how I get on, follow the blog!

File, New: Personal AI Assistant

With the launch of my latest iOS app out of the way, I’ve decided on an ambitious new side-project. Since I’m often guilty of starting things and then never seeing them through, I figured that launching this sub-blog might give me the motivation I need to stay focused.

The Elevator Pitch

A voice-activated AI assistant, with a visual avatar, built using C# running with the mono runtime on a Raspberry Pi.

Okay, so this will require a lot of technology that I’m not very familiar with. To break it down, I’m going to need to master at least the following:

  • Building console-first .NET executables with GUIs
  • Audio I/O hardware for a Raspberry Pi
  • Simple(ish) graphics and animation
  • Implementing local speech recognition (for voice activation)
  • Training an AI engine

I’ve put a lot on my plate.


Loosely borrowing from agile principles, I’m planning to implement this in small, achievable increments, or sprints. I envisage these as follows:

Week 1:

Build a simple console-based chat bot, powered by a mildly-trained AI engine.

Week 2:

Train the engine to understand commands that I want my assistant to act upon.

Week 3:

Implement always-on speech recognition, to activate the chat features when a certain phrase is detected (think “Hey, Siri”).

Week 4:

Achieve a complete audio ‘conversation’ with my AI.

Week 5:

Implement a simple graphical avatar (the ‘face’ of my AI) which reacts to commands with facial expressions.

Week 6:

Deploy the project to a Raspberry Pi, and enjoy!

I hope that setting these weekly goals will keep me focused on achieving this project in just six weeks. It may not happen that way, but that’s what I’m shooting for!

Chosen Technologies

For the AI engine, I have decided on using the power of Wit.ai. Wit is a free bot engine which powers thousands of chat bots, and it has good documentation and a large community of developers, all of which I expect to depend on throughout this project. I’m excited to get stuck into training my AI in preparation for running my own virtual assistant.

As mentioned in my brief, I am attempting this using C#, which will run on my Raspberry Pi under Ubuntu MATE, with the executable running on the mono runtime. I’m aware that there are other languages which might be better suited to the task, but I have chosen this approach for a few reasons. Firstly, I am primarily a .NET developer by trade, working with Xamarin and ASP.NET, so this is the realm I’m most familiar with. Also, assuming all goes well, I’ll be interested to see what other platforms I might be able to run this on – with the versatility and reach that C# gains from mono, I won’t be limited to just a desktop application.

My decision to run this on the Pi is just a personal endeavour – I bought the Pi with no idea what I was going to use it for, and since this project holds the potential for home automation, what better IoT tool than a Raspberry Pi?

I don’t have time to reinvent the wheel, so I will be experimenting with some open source libraries. So far I’ve come across this great post on CodeProject, which seems to provide an answer for how I could implement voice activation. I haven’t tried it out yet, but hopefully it’ll be a great springboard. I’ve also found several third-party libraries for the Wit.ai API, and having looked at each of them, I think I’ll be experimenting with this one, because it seems to work out of the box with just a couple of lines of simple code.


Project H.E.N.R.Y

This project, and the AI, will need a name. Henceforth, I name thee HENRY. Not just because (like Siri or Cortana) it’s an easily distinguishable word for voice-activation, but also because it’s the name of the man who inspires me the most… My Father.

Get Involved

While this is a personal endeavour, I’m going to keep this project open source and available for anyone to borrow, copy, comment on or contribute to. Since I’ll be using other open source libraries to help me with this project, it seems only fitting to offer this project up to others, and I’m excited by the idea that some may wish to join in. Who knows, perhaps this might generate a small community! The link to the GitHub repo is here.

You can also follow along by following this blog or signing up for email updates – all links below.

…Off we go!