Today at Ignite, Satya Nadella kicked off the day with the keynote. I came away from the talk with four takeaways. First, Microsoft is not divesting from developers, infrastructure, data, and AI; all of those topics came up in the keynote, but they were not the focus. Under Mr. Nadella, Microsoft set out to reinvent itself, and from where I was sitting today, I can say: mission accomplished. Today I am proud to call myself a Microsoft developer (even if I am getting into management). So proud, in fact, that I was able to tell the story of Microsoft’s reinvention to a potential intern when they asked me about my team’s usage and acceptance of open source technology and products. It is truly a great experience to tell my future employees that we embrace open source and are better because of it. Three years ago, Microsoft was focused on open source, Azure services, and AI. Today, they are leaders in those areas, so the focus for the future is changing. The remaining three takeaways are the new areas of focus I heard in the keynote: “Modern Workspaces,” “Business Applications” (aka Dynamics 365), and, surprisingly, quantum computing.
The idea of a “Modern Workspace” is the evolution of the workspace for a mobile-first, cloud-first world: seamlessly working from anywhere, at any time. It is the evolution of Office 365 into something bigger than a software service or offering. The Microsoft Graph is building out connections between people, content, and documents, and Microsoft’s acquisition of LinkedIn is also becoming part of this graph. One of the coolest announcements was Bing for Business. I sat in on a presentation on search this afternoon with Naomi Moneypenny and Kathrine Hammervold, and for the first time I truly understood why search in the enterprise is so hard. Microsoft and Google got search right in the consumer space, so it seems like it should just work in the enterprise, but it turns out the needs of enterprise search are vastly different from consumer search. Bing for Business will tie into the Microsoft Graph to make enterprise search better. Super exciting!
Dynamics 365 made a big appearance today, both in the sessions and in the keynote. There was a lot of talk about how this platform leverages the Microsoft Graph to improve business outcomes. I attended a Business Apps keynote after the main keynote, where they demoed uses of Dynamics 365, together with the graph and LinkedIn, to make recruiting and talent discovery easier. My observation is that Microsoft views Dynamics 365 as a mature offering and a natural extension of the core offerings of Office 365. I’m looking forward to exploring this product line in the future.
Quantum computing was the biggest topic in the keynote, and for me it came out of left field. There was a full panel discussion on the advances in building a quantum computer. Microsoft announced that quantum computing tools are available today as simulators in Visual Studio (on your PC) and in Azure. I think this is a clear signal that true quantum computing is coming, and coming at scale, soon. This is not some research or pet project at Microsoft. It feels very similar to Microsoft’s vision of AI three or four years ago, which today is very real. I can only imagine what quantum computing at scale will do for our world if it comes to fruition in the next four years.
In my last post in this series (a long time ago), I described how to solve one of the most basic problems in web development with Node: user authentication. In this post I’m going to talk about another common web development problem, database access; specifically, using an ORM library to facilitate (and accelerate) database access. Since I first understood the concept of ORM (Object Relational Mapping) and used it in a professional setting, I have been a huge fan. My first exposure to ORM (where I truly understood what was going on) was with LINQ to SQL while building a WCF service in C#. It was super empowering when I realized that I could query the database by writing C# code! No more string concatenation to build queries (a major risk for SQL injection attacks) and no more dependencies on other teams to build stored procedures. As a developer, this was extremely liberating! Naturally, when exploring a new framework I want to learn what libraries and tools are available to provide the ORM. For Node, I have chosen Sequelize. I’m going to go over defining a model (a table) and querying the database.
In ORM, the model is the representation of your database table. It defines what columns there are, the data types they hold, relationships to other models (tables) and, in more advanced cases, things like indexes and constraints. In my experience, no ORM library gives you the full functionality of SQL for defining your tables, so the question is, why bother with defining models? The answer is portability. Sequelize (like any good ORM tool) can map to different SQL dialects. It supports MySQL, SQLite, and SQL Server. If you are running automated tests, no problem: Sequelize will use the in-memory SQLite database. Deploying to a Linux cloud or an on-prem Microsoft environment? Again, no problem, because the library will generate SQL for the targeted environment. An ORM can also generate the database for you and update the schema. This concept is called ‘migrations’ and is a much more involved topic that I will probably devote an entire post to in the future. Below is a very basic model that describes a database table called ‘WeatherObservation.’ I have defined a few columns to store some data from a Raspberry Pi and a DHT11 sensor.
At Associated Electric, we leverage Microsoft’s Team Foundation Server (TFS) for work tracking and as a source code repository. Over the years I have customized and built out work items to help me manage and stay on top of the work my team is doing. I have also experimented with some of the forecasting and scheduling functions of TFS, i.e. burndowns, Kanban boards, task boards, etc. The challenge with many of these functions is that the data required to make them work can be intense. A medium-sized project may consist of a single TFS work item with a dozen or so user stories, and those user stories may each have half a dozen tasks, all requiring a half dozen or so fields just to communicate the work, who’s doing it, and when. Getting a burndown and accurate forecasting in TFS requires at least another half dozen fields, and if you don’t take the time to think about each of those fields, having them won’t provide you with any value. For a project like this you may easily spend a day or two just keying work items into TFS, not a very exciting endeavor for an analyst or developer. I settled for using story points on the user stories and not bothering with the task level, but I was always hungry for more out of TFS.
Recently I had to spend an extended amount of time away from managing my team, and when I returned, I discovered that things were not running as well as I would have hoped. It is nothing against my team; they were executing the work the best they knew how. It became obvious to me that I needed more insight into all the work going on in my team, and they needed more clarity on priorities. Across my team there is a large range of work, everything from writing annual reviews to tier-two user support to implementing ASP.NET web applications. The one thing all this work has in common is that it can be abstracted into tasks: tasks that can be scheduled and prioritized in TFS.
I also discovered that TFS 2015 has service hooks, including a generic web hook that can be called on work item creation and updates. From there my creativity exploded. All of my team’s project milestones and support requests are modeled as user stories, and using the TFS REST APIs, I created a web hook that automatically creates a child task for the implementation of the project milestone or the resolution of the support request. I even plugged into the time tracking data we maintain to populate time estimates into these tasks based on the TFS area. Doing that instantly enabled me to start leveraging the TFS task board with accurate time estimates and absolutely no additional data entry. I also implemented a web hook for work item updates that automatically closes the parent user story when all child tasks are closed. I created a second update action to propagate changes from the parent user story down to all active children, i.e. if you carry a user story forward into the next iteration, the iteration will automatically update for the remaining child tasks. By automating this data entry, my team is now using the task board, which gives me a single place to review all the work going on and the opportunity to easily guide and correct work priorities for team members. All of this happens without any additional data entry on my part. I’ve included the models that you can use to deserialize the JSON calls from the TFS service hooks to help anyone else get started.
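To give a feel for what those service hook calls look like, here is a sketch in plain JavaScript of pulling the interesting fields out of a TFS 2015 “workitem.updated” payload. The field paths (`resource.workItemId`, `resource.revision.fields`) follow the TFS service hook event shape; the function name and the trimmed sample payload are my own illustration, not the exact models from my hook:

```javascript
// Hypothetical sketch: summarize a TFS "workitem.updated" service hook event.
// A real handler would use these values to decide whether to close the
// parent user story or propagate the iteration to child tasks.
function summarizeWorkItemUpdate(event) {
  const fields = event.resource.revision.fields;
  return {
    workItemId: event.resource.workItemId,
    type: fields['System.WorkItemType'],
    state: fields['System.State'],
    iterationPath: fields['System.IterationPath']
  };
}

// A trimmed example payload containing only the fields the sketch reads.
const sample = {
  eventType: 'workitem.updated',
  resource: {
    workItemId: 42,
    revision: {
      fields: {
        'System.WorkItemType': 'Task',
        'System.State': 'Closed',
        'System.IterationPath': 'MyTeam\\Sprint 12'
      }
    }
  }
};

console.log(summarizeWorkItemUpdate(sample).state); // 'Closed'
```

From a summary like this, the update hook can query the TFS REST APIs for the work item’s siblings and, if every child task is closed, PATCH the parent user story’s state.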