Posted On: Saturday, 18 February 2017 by Rajiv Popat

There was a time when making IDE plugins for Visual Studio was for folks who specialized in the art of writing plugins, i.e. companies like DevExpress and JetBrains. With Visual Studio Code, writing extensions is no longer a mysterious black art. Even regular programmers like you and me can write extensions that solve our specific little problems.

My specific little problem? I hate having to type a semicolon and then hit enter on every line of code that I write, especially when the IDE is auto-completing my brackets and quotes. For example, when I write:

Console.WriteLine("Hello

If I have the C# plugin installed in VS Code, VS Code understands my intent and completes the statement by writing:

Console.WriteLine("Hello[my cursor is here]")

Notice my cursor position in the snippet above? At this point, if I need to end the line, I hit the right arrow key twice, then type a semicolon and then hit enter to continue to the next line.

Technically, in the above example, if my IDE were really smart, I should just be able to type a semicolon where my cursor is and have the IDE understand my intent, move the semicolon to the end of the line and automatically move me to the next line so that I can continue coding.

It's just 4 keystrokes per line (two right arrows, a semicolon and an enter), but when you write hundreds of lines of code, condensing 4 keystrokes into 1 adds up and goes a long way in making you productive. Actually, it's not so much about reducing the keystrokes as it is about staying in the flow and rhythm.

At one point DevExpress CodeRush had this feature, and if I wrote:

Console.WriteLine("Hello;

CodeRush would intelligently complete this as:

Console.WriteLine("Hello");

It was a very fluid experience. I used to love that feature. When I moved to Linux and Visual Studio Code, I lost plugins like ReSharper and CodeRush, but other free Visual Studio Code plugins made up for most of what I loved about them. However, I continued to miss this particular feature, where the IDE would automatically understand my intent and move my semicolons where they belong.

So, I decided to see how difficult it would be to write an extension which would:

  1. Automatically move the semicolon to the end of the line even if you type it in the middle of the line (except in special cases like a for loop or a foreach loop).
  2. Automatically move you to the next line without you having to explicitly hit enter (a rough sketch of the idea follows this list).
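
To give you a sense of how approachable the VS Code extension API is, here is a minimal sketch of the core idea. This is not the actual Autoend source; the command name, the for/foreach check and the wiring are illustrative assumptions on my part (you would bind the command to the ";" key through a keybinding contribution in package.json):

import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
    // Hypothetical command id; a keybinding in package.json would run this
    // whenever a semicolon is typed in the editor.
    const command = vscode.commands.registerCommand('autoend.smartSemicolon', async () => {
        const editor = vscode.window.activeTextEditor;
        if (!editor) {
            return;
        }
        const line = editor.document.lineAt(editor.selection.active.line);
        // Special case: inside a for / foreach header the semicolon should
        // stay exactly where it was typed.
        if (/\bfor(each)?\s*\(/.test(line.text)) {
            await editor.edit(edit => edit.insert(editor.selection.active, ';'));
            return;
        }
        // Otherwise, append the semicolon at the true end of the line and
        // start a fresh line below it.
        await editor.edit(edit => edit.insert(line.range.end, ';\n'));
        const nextLine = new vscode.Position(line.lineNumber + 1, 0);
        editor.selection = new vscode.Selection(nextLine, nextLine);
    });
    context.subscriptions.push(command);
}

The real extension obviously does more (language-specific rules, preserving indentation and so on), but the core of the idea really is this small.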

It took me one day to write the extension. It took me one more day to brand it with a logo and documentation and publish it to the Visual Studio Code Marketplace after releasing it on GitHub. Before I started this extension, I knew nothing about writing Visual Studio Code extensions. Not to mention that the entire development was done on a Linux laptop. The code was written in TypeScript, and I am not a JavaScript or TypeScript guru either.

I think a regular programmer like me being able to write a plugin of this sort, publish it live to a marketplace and have folks download it within just a couple of days says more about Visual Studio Code's highly extensible design than it says about my talent. It is by far one of the most amazing editors / IDEs I've seen in my life.

Because I used the source code of an open source extension on the marketplace to learn how to get started with writing extensions, and because I could see an ever-growing community of open source extensions on the Visual Studio Code Marketplace, I'm also publishing my code on GitHub.

Go ahead and try it out. It has already had a couple of dozen downloads; it makes me hugely productive as a programmer when I am inside Visual Studio Code and, above all, it keeps me in flow when I write code. I've been fully supporting this plugin and closing bugs as and when I find them or when they are reported.

It is called 'Autoend' and is available for free on the Visual Studio Code Marketplace.

If you do try it out, please drop me your feedback / comments in the comment section of this post. If you find an issue, you can always post it on GitHub or drop a line to contact@thousandtyone.com. Happy coding!




Posted On: Tuesday, 24 January 2017 by Rajiv Popat

I've always been both a Windows and an Ubuntu user. I'm not an OS zealot and I love both operating systems. My work machine runs on Windows and one of my two laptops at home has been on Ubuntu for a very long time. I love Windows because it's convenient. I love Linux because it's powerful, seriously geeky and free. Which is why, when the .NET Core team announced the ability to run on multiple platforms (including Windows and Linux), the announcement was like music to my ears. This meant that I could be on the OS of my choice (or fancy) and still do development in a language I love (C#).

I had already played around with using Visual Studio Code as a full-blown IDE and had realized that with the right plugins it's possible to be fully productive on it. The only missing piece now was SQL Server, for which I would always need Windows. And then the SQL Server team announced that they've added support for multiple platforms as well, and you can now run SQL Server on Linux.

This meant that I could now use Linux as my primary OS if I wanted to, and that was an itch I really wanted to scratch. It was time to test where Linux stands as a primary operating system for me as a .NET developer. I've had one laptop running only Ubuntu for ages, but I just use that machine for surfing, browsing, watching YouTube and sometimes writing books or posts. Using Linux as a daily driver was going to be completely different. This time around, my goal was to find out if I could use Linux as my primary operating system.

With this goal in mind, I decided to look at various Linux distributions and pick one for my work life. This post is more of a running diary of that experience.

Since my organization wasn't fully ready to move me to Linux (we're an all-Windows shop), I decided to get Linux on a VM and spend most of my work hours there for a few days before jumping in fully. Given that I have 8 GB of RAM and over 200 gigs of disk space with an i5 processor, I figured I could give the VM substantial horsepower and spend my work life inside it. And because this was going to be a work machine, I wanted to try out distributions other than Ubuntu, which is what I've been using for years. Why? Because I wanted variety and spice in my life.

After looking at various Linux distros, this is what I shortlisted:

Linux Mint:

Apparently, this is the simplest version of Linux to hop on to when you move over from the Windows world. Under the hood it's Ubuntu, but it looks and feels more like Windows, which is why a lot of Windows users who move to Linux and are confused by Unity in Ubuntu like Mint better. For me, if I wanted an OS that looked and felt like Windows, I was already on Windows and could just stick with it; so Mint was not something that appealed to me.

Elementary OS:

I went and grabbed a copy of elementary OS and installed it on a VirtualBox VM, only to realize that with an 8 GB host and 4 GB given to VirtualBox, the OS was still slow and choppy. At the time, however, I wasn't aware how large an impact small settings like enabling 3D acceleration and GPU allocation can have on the overall speed of Linux in a virtual machine, so in all likelihood it wasn't elementary OS that was the issue but probably bad configuration on my part.

I read a few posts mentioning that elementary OS works much better with VMware Player (which is free for trial and personal use) than it does with VirtualBox, so I tried it on VMware Player and it was better; but since this was meant to be a work VM, using VMware Player for work-related VMs wasn't allowed by the VMware license anyway. So I dropped the idea and deleted the VM.

At the end of the day, if Mint looks like Windows, elementary is inspired by the Mac; and if I loved Macs, I would get a Mac. The choppy performance of elementary on VirtualBox and the fact that it's inspired by the Mac ruled it out as the distribution I would pick for myself at this point in time. There is a high chance I would have used it had the performance on VirtualBox been better, and there is a good chance I'll revisit elementary sometime in the future because I genuinely liked and appreciated the user interface, but for this evaluation I moved on to other distributions.

Fedora:

I grabbed a copy of Fedora and got it installed, up and running in no time. The GNOME-based desktop is, for lack of a better word, extremely classy. The OS was fast and slick and worked extremely well. I was about to settle down with Fedora when I realized that the Chrome installation I had done on the OS just doesn't work. No errors. No warnings. Chrome just doesn't start. Actually, Chrome starts and then disappears. No windows. No screens. (I later encountered a similar issue on Ubuntu and fixed it by starting Chrome without the GPU and then disabling hardware acceleration in Chrome's settings. For more details on this fix, see the 'Chrome Blackouts' section of this post, or read on.)

I then moved on to the .NET installation and realized that .NET Core keeps giving an initialization error every time I try to do a "dotnet new". This is because Fedora 25 is not supported by .NET Core. Turns out there is a bug in .NET Core which makes it require version 52 of the ICU library, and Fedora 25 ships with a higher version. Here is an unofficial fix, but I wasn't able to make it work; and after wasting hours on this I moved back to the familiarity of Ubuntu.

Ubuntu:

After having tried three different distributions, I ran out of patience (and almost an entire day) and decided to settle down in the known territory of Ubuntu. Unity is a controversial topic: some folks love the UI, others can't stand it. I personally have no issues with it, since I've used Unity for months on my home laptop and am happy with it. But having tried Fedora, I had also fallen in love with GNOME 3, and because this is Linux, I realized there was nothing stopping me from running GNOME 3 on Ubuntu. So I did just that and grabbed GNOME 3 on top of Ubuntu after I had installed base Ubuntu. Of course, I could have fetched Ubuntu GNOME directly, but I like the manual way better because it lets me switch between GNOME 3 and Unity whenever I want to (or at each login!). I also love the Arc theme, so I decided to grab it and install it using the GNOME Tweak Tool. Eventually, however, with GNOME 3 I settled for the default Adwaita theme.

Note: Version 16.10 of Ubuntu somehow doesn't seem to play nice with VMware Player on my machine, and causes kernel panics and the famous 'CPU has been disabled by the guest OS' error. However, it worked fine with VirtualBox, which is nice because VirtualBox was my preference for virtualization to begin with.

Long story short, at this point I had the familiarity of Ubuntu and the newness of the GNOME 3 user interface that I experienced with Fedora. The best of both worlds:

So I was on Ubuntu with GNOME 3, but I was still far away from making this machine my daily driver. There were multiple other hoops I had to jump through to make this machine usable as a daily driver.

Sound Card Issues:

With Ubuntu installed on my virtual machine, I realized that sound doesn't work with Ubuntu on VirtualBox. Turns out that after a certain version, VirtualBox doesn't seem to pick the right sound card drivers for the host and guest operating systems, and you need to pick them manually. What worked for me was Windows DirectSound on the host and Intel HD Audio on the guest operating system.

I then go to the sound settings of Ubuntu and crank up the volume to the maximum value allowed. Actually, I crank it up to 140% of what's allowed:

Sometimes, when I want the sound to go even louder, I have to go to the terminal and crank it up further with the alsamixer command:

And then the sound works fine. The next thing I was going to need if I was going to use this machine on a daily basis was a stable browser like Chrome.

Chrome Blackouts:

I go ahead and grab Chrome and am just about ready to work when I see a blank black screen each time I start it. To fix this, I start Chrome without the GPU using the command:

google-chrome --disable-gpu

I run this from my terminal window, and once Chrome starts, I disable "Use hardware acceleration when available" by going to Chrome Settings and then into Chrome's Advanced Settings.

Note: This same fix works on Fedora, where the Chrome window disappears after you click on the Chrome icon.

Sluggish Speeds:

My VirtualBox VM is now up and running; I have a browser and sound, but the performance is still sluggish. I crank up the video memory to 128 MB and select 'Enable 3D Acceleration' in the VirtualBox settings, which considerably speeds up the virtual machine. I also grab the CompizConfig Settings Manager so that I can tweak animations, and I disable them to make the system move faster. This speeds up my VirtualBox VM considerably and makes it actually extremely usable.

But What About Email?

With the basic setup of the OS complete, my next concern is email. Because we use Office 365 at my organization and Exchange at my client's organization, I needed something that works seamlessly with Exchange Web Services. While Evolution comes pre-installed on Fedora, Ubuntu comes preloaded with Thunderbird, which, based on what I've read, doesn't work with Exchange services as of this writing. So I grab a copy of Evolution on my Ubuntu machine and configure my Office 365 email with it.

Configuring Office 365 email was relatively easy, though Evolution does tend to lose your preconfigured accounts the first time you configure them. If that happens, open your process monitor, kill all the Evolution processes and start fresh, and there is a high chance you'll find your accounts back. I ended up creating the accounts thrice and then found them all when I killed the Evolution processes and started Evolution afresh. Then I deleted all of them and re-created a single fresh account. This was of course a one-time issue, and things have been fine once the accounts were configured.

Configuring Office 365 accounts was easy. With on-premises Exchange accounts, however, things get a little more complex to troubleshoot. Because my client uses NTLM-based authentication and Evolution detected it as Kerberos, I kept getting the following error message:

The reported error was "No response: SPNEGO cannot find mechanisms to negotiate".

Finding the issue here was mostly a hit-and-try exercise: I tried basic authentication, which didn't work, so I moved to NTLM, and that worked.

Side note: the lack of support for Exchange in mature email clients like Thunderbird, and the fact that you have to shell out $10 a year to get an Exchange plugin for Thunderbird, is a little disheartening. I have no issues with paying developers for the hard work they put in, but paying to accomplish something as simple as checking email, when your entire OS is open source (and free) and every other app on your machine is open source, is, for lack of a better word, a little ironic. So I decided to grab Evolution, which supports Exchange free out of the box, and battle out the issues. And it paid off. Evolution has been working well with both the Office 365 email account and the Exchange email account, and I am actually starting to like it a whole lot.

For those of you who haven't used Evolution, the only thing I missed compared to Outlook was free-text search. Turns out, Evolution has a very powerful advanced search and you can also turn on expression-based searches:

Visual Studio Code:

With everything else configured, I set out to load Visual Studio Code (the primary reason I started spending a day on making myself a Linux work VM). Getting Visual Studio Code itself is super easy: you just download the package and install it using the Application Manager. However, when I start Visual Studio Code I get a blank black screen. This reminds me of the black window in Chrome, so I go ahead and look for a similar fix. You just start Code without the GPU:

code --disable-gpu

But because we can't keep doing this each time, we add it as an alias in our ~/.bashrc file (or, in my case, in my ~/.bash_aliases file, which .bashrc references; that just helps keep things clean):

alias code='code --disable-gpu'

Once you've added the line, you need to close your terminal and start it afresh for the alias to kick in. Caveats? First, you can't open Code from the icon in GNOME. Second, you can't do a "code ." and expect "." to represent the current folder you are in when working on the terminal. You need to open Visual Studio Code and then do a File / Open… which is not that bad.

Next, I follow these instructions to install .NET Core on Ubuntu 16.10. Then I install the usual plugins and I am in business:

And so, with the development environment in place, we now need a database to work with.

SQL Server:

SQL Server installation was by far the smoothest. You just follow the instructions here and then you follow these instructions for installing the client tools. SQL Server claims to require 4 GB RAM but I barely notice any slowdowns post install and the DB has been running blazing fast. I’m actually really impressed with the DB performance thus far.

There are no UI tools like SSMS for SQL Server on Ubuntu, so I grab DBeaver and use it as a visual editor for DB design.

To be honest, the performance of DBeaver in a VirtualBox VM with 4 GB of RAM is extremely sluggish, and it tends to slow down the entire VM. At the risk of offending and triggering Eclipse fans, it's a trend I've seen with a lot of other applications built on Eclipse. I then move to SQuirreL SQL, which is lightweight but only provides query capabilities and no drag-and-drop DDL capabilities.

I'm still looking for a visual database development tool, but for now, between the command line, SQuirreL and DBeaver, I should be good.

And A Shared Folder with the Host OS:

If you're going to run in VM mode, you will probably want a shared folder with the host OS which you can mount automatically, so that anything you save there is also available when you are not using Ubuntu. I do that by sharing a specific folder on my host OS with Ubuntu using the VirtualBox settings:

And then I run into permission issues where I cannot access this folder from Ubuntu, which I solve by adding my current user to the vboxsf group.

And I'm set for now, all ready to take my newly created VM for a spin. My entire disk file, after installing everything I need, is only about 12 GB, so I decide to take a full backup of the VDI file instead of taking a snapshot; it's still a file I can carry on a 16 GB drive.

My Overall Experience:

I've been a happy Linux user on and off, on at least one personal laptop, for over 15 years, and Linux has come a long way. But even today, every time I decide to spend a day with various Linux distributions to see where they are, play around with them or try to expand the scope of Linux in my life, I encounter some hurdles I have to jump over, and I eventually end up learning new things. That is what makes me angry at Linux sometimes. It's also what makes me love Linux most of the time. Let's just say it's a healthy relationship, the kind you have with your friends, wife or kids. :)

If you're an average office user installing Linux on a bare-metal, modern-day laptop, Linux has indeed come a long way: it is very usable and your learning curve might be minimal. You can probably get started almost as easily as you do with Windows. But if you plan on using Linux as a primary work machine (especially in a virtualized environment because your office is on Windows), there is a high chance you'll hit a few bumps. Even so, between the dozen-odd distributions of Linux, a couple of virtual machine solutions and a couple of dozen workarounds, it should not take you more than a couple of hours to be completely up and running, and that (genuinely, without the slightest tone of sarcasm in my voice) is not such a bad thing at all.

My overall experience after spending a day playing with Linux, with the idea of using it as my primary work environment, is that it has come a long way, and I encourage each one of you to try it for a month as a primary work OS, even if it happens to be on a VM! With Visual Studio Code, .NET and SQL Server all running on it, there should not be any reason why you aren't taking Linux for a test drive.

On a different note, I am loving the new Microsoft for making things like this even possible. It takes a lot of courage for a company of Microsoft's size to embrace a truly open world where everything they build, from development platforms to development tools and even databases, runs on multiple platforms.

Here is a big thumbs-up to both the .NET Core team and the SQL Server team for embracing openness. When developers have open choices like these, everyone wins. I'm genuinely impressed with what I have experienced; I've been on this VM as my primary machine for a week and nothing has broken. Pure awesomeness.

Update: After using the virtual machine for a few days, I finally took the plunge and decided to move to Linux on my work machine. All the GPU issues I had to work around in this post are non-existent on a bare-metal install, and the same Ubuntu + GNOME combination has been working really well for me over the past few days.




Posted On: Tuesday, 10 January 2017 by Rajiv Popat

I'm obviously late to the party, but I've been hooked on Visual Studio Code, both as an editor and as a complete IDE for developing .NET Core applications, and all I can say about Visual Studio Code and .NET Core is that I am loving everything I see.

Getting up and running with .NET console applications is really easy with Visual Studio Code and something I'll probably cover in a different post. This post is focused more on building and debugging ASP.NET Core applications using Visual Studio Code. In the posts to come we will build a simple real-life application using .NET Core and Visual Studio Code.

I recently needed a simple application where I can store excerpts from various books and research papers I read for future reference, so that's the project I'm going to work on for the purposes of these posts. The code that we build during this series will eventually be open sourced and posted on GitHub.

In this post we get started with a simple ASP.NET Core project using Yeoman and the .NET Core CLI, and we will then debug it using Visual Studio Code. Why do all this when we can build the same application using Visual Studio 2015? Even though we will build this application on Windows, we want the toolset and code to be portable so that we can easily move to Linux or a Mac and start developing there whenever we feel the need to do so; which is why we won't use anything that we cannot use in a Linux or Mac environment.

In fact, once we get through a couple of posts, we will actually move to a Linux machine and start developing there.

Let's start by creating an ASP.NET Core app and setting up debugging using Visual Studio Code. You can of course do this in one of two ways:

Approach #1: The .NET Core CLI:

This is probably the simplest approach and provides you with a nice, clean ASP.NET Core application. It's pretty similar to doing "File / New / Web Application" in Visual Studio, if you happen to have been a Visual Studio developer in the past. Some folks may love this because it's straightforward. Others may not like it because it bundles a bunch of things (like Entity Framework, membership and stuff you may not even be interested in using). Plus, as of now, it doesn't seem to integrate things like Bower out of the box (more on this later). However, if you are looking to get a simple ASP.NET Core app up and running quickly, you can start a command prompt window, go to the folder you want to create the project in and do:

dotnet new -t Web

This creates a simple Web Application project. To fetch all the dependencies the project needs you would have to do a:

dotnet restore

And to run the project (which also builds it automatically) you would do:

dotnet run

This starts the development web server and hosts the application, which means you can now access it at http://localhost:5000:

And if you open your browser and hit the URL you have the application running:

We'll come to the debugging part in a minute. If you are not happy with the bunch of extra things that were added to your project, you can get more control over the templates you use for stubbing out your application by using Yeoman, which brings us to the second way of stubbing out your ASP.NET Core applications.

Approach #2: Yeoman Templates:

You will have to install Yeoman before you begin with this, which would mean installing npm (and the simplest way of doing that is installing Node.js). Yeoman also fetches your JavaScript files, and files like bootstrap.css, from the right locations using Bower. So you are better off installing Bower up front before you proceed.

Once you have Yeoman installed you can do a:

yo aspnet

Run this from your command prompt, once you are in the folder where you would like to create the project. Yeoman gives you greater control over the project you stub out by letting you pick from a host of templates (which in turn decide which dependencies get installed):

In the above sample / screenshot we have the option of picking from different templates. We can pick a basic web application without membership and authorization, OR just "Web Application", which has everything (including membership and authorization) pre-configured. With this I also get to pick the UI framework that I would like to use for my project:

In the above example I'm going with Bootstrap. Once done, you specify the name of the project, and once that is done you can go ahead with:

cd research
dotnet restore
dotnet run

In the above commands we switch to the research folder because the yo command creates a folder with your project name. Once you run the code with 'dotnet run', you get an application similar to the one you got with the .NET CLI, only this time around you don't see the Login link in the top right corner of the application:

Now that we have the application up and running (with either the .NET CLI or Yeoman, depending on what you pick), let's get to debugging it using Visual Studio Code.

Debugging Using Visual Studio Code:

The more I use Visual Studio Code the more I seem to like it. It's light. It's elegant. It works on multiple platforms, and what I love about it is the ecosystem of plugins that turn a lightning-fast editor into a full-blown IDE! If you don't have the IDE, grab a copy from here and install it on your machine. Now, from the command prompt, you can navigate to your project folder and type "code ." (without the quotes) and you should see your project open. The "." of course stands for the current folder; in Visual Studio Code you don't work with projects / solutions, you open specific folders. Which means that if opening the project from the command prompt doesn't make sense to you, you can open Visual Studio Code and open a folder from the File / Open menu. The moment you open the codebase in Visual Studio Code, it looks for required assets and asks you if it should import those. Click on Yes.

Like I said before, it's the plugins that turn this code editor into a powerful IDE. I've jumped to the plugins tab, searched for and grabbed the following plugins I need to get started:

At this point, if you were using Yeoman and had Bower properly installed, your launch.json should have the following value correctly set, and you should be able to debug your application by going to the Debug tab, selecting ".NET Core Launch (Web)" from the debug type drop-down and hitting the play button or the familiar F5 key:

If you started with the .NET CLI tools (instead of Yeoman), you may not automatically get all the Bower dependencies, like Bootstrap and jQuery, configured in your bower.json file. So when you run the project with "dotnet run" it runs fine, but when you debug using Visual Studio Code you see that things like Bootstrap and jQuery aren't properly imported and your application looks like this (and the JavaScript functions inside the application don't work either):

This is where it pays to understand how Bower really works and how these templates are generated. The reason your application runs fine when you execute it using "dotnet run" and doesn't when you execute it using Visual Studio Code is that the two (dotnet run / Visual Studio Code debugging) execute the application in different modes. While "dotnet run" executes the application in production mode, Visual Studio Code runs it in debug / development mode.

If you open your _Layout.cshtml file, you will notice that the template has generated a layout file that picks up Bootstrap, jQuery and other dependencies from the "~/lib" folder in the Development environment, and directly from the live ASP.NET CDN in the case of the Production and Staging environments. Since we are running in the Development environment when debugging from Visual Studio Code, we need the dependencies to be present in the "~/lib" folder.

If you check the wwwroot folder however you’ll see that the lib folder is missing:

To get the dependencies in the right place, we'll use Bower to download them. Where Bower downloads the dependencies to is defined in the ".bowerrc" file:

And as the above picture shows, our .bowerrc file does have the right location. We also have the Bower plugin installed. So let's hit Ctrl + P and type "> Bower" in the search bar that pops up:

Now you get a list of Bower commands, from which you can select Bower Install and hit enter:

The moment you do, Bower should grab all the required dependencies for you, and you should now see a new lib folder with the right dependencies:

And you are also able to debug the application properly now, with Bootstrap, JavaScript and other dependencies working fine:

Personally, I like Yeoman primarily because it provides a larger choice of templates and runs the "bower install" command pretty much automatically (assuming you have Bower installed), but both the .NET Core CLI and Yeoman should help you get started quickly with your first ASP.NET Core application. Both work across platforms, and which one (the .NET CLI or Yeoman) you use is just a matter of which templates you prefer.

As far as Visual Studio Code is concerned, I love it. While the Visual Studio 2015 Professional versions manage some of these tasks out of the box, Visual Studio Code is really nice because, for me, it hits the sweet spot between showing me what's happening under the hood and keeping me sufficiently productive. This post covers two ways of getting up and running with an ASP.NET Core project; you can use either of the two, and all it takes is a few minutes to get started with the setup and debugging of a new ASP.NET Core project using Visual Studio Code.

In the next post we'll get started with the actual application using ASP.NET Core, where we will be creating a simple application in which you can store excerpts from various books and research papers that you might be reading, for future reference. As the series of posts proceeds, I'll put the code on GitHub and also try to host it using the cheapest, most scalable cloud options.




Posted On: Saturday, 07 January 2017 by Rajiv Popat

Lately, there is a fad going around about the whole idea of 'work-life balance'.

Anyone you talk to claims that they are overworked and are finding it difficult to strike a 'work-life balance'.

But a question really worth reflecting on is: are we really overworked? Or are we becoming downright lazy, not because we don't love hard work, but because we are unable to find any creative outlet in the work that we do?

There is this myth of folks working 80 or 90 hour work weeks and feeling the urge to strike a healthy work-life balance. Laura Vanderkam decided to question the validity of the claims of people who say they are working 80-hour workweeks and came out with some striking revelations in her book, 168 Hours: You Have More Time Than You Think. For her book, Laura reached out to University of Maryland sociologist John Robinson about his research on the reality of work hours, and the figures she found were astounding:

When University of Maryland sociologist John Robinson and his colleagues analyzed people's estimates of how much they worked and compared those to the time diaries, they found that the more hours people claimed to work, the more inaccurate they were. You can guess in which direction.

Almost no one who claimed a 70-hour workweek was underestimating. Indeed, the average person who claimed to work more than 75 hours per week generally logged about 55. When I contacted Robinson recently, he sent me a working paper he was drafting using more recent numbers, from 2006-2007. The time spent working had come up a little for people whose estimated hours showed workaholic tendencies, but even so, the average person who claimed to be working 60-69 hours per week was actually logging 52.6, and the average person claiming to work 70, 80, 90, or more hours was logging less than 60.

Laura's hypothesis is that people overestimate their workweeks. After all, we tend to overestimate the time we spend on things we don't enjoy and underestimate the time we spend on things we love doing. Which is why most people underestimate their television watching time and overestimate their workweeks. Remember how, in your childhood, your study time never used to end but the study breaks used to run out in no time? It's the same concept of the relative speed of time when it comes to most people feeling that they are overworked. Bottom line: we aren't spending 80 hours a week working. We just feel we are, because we don't love our work as much as we love watching television. And we don't connect to the work that we do either. No amount of work-life balance can fix that.

While Laura's writing style is professional, this article is a bit more unforgiving and hits the nail right on the head:

Are you feeling drained and listless at work? One of the biggest reasons we find ourselves frustrated with our jobs is that we don't have an outlet for things that are important to us, but we need to keep at it because of bills and general adult responsibilities.

Us millennials have two major problems hanging over us: crushing debt and the desire (with no outlet) to do something meaningful. Some of us went to college with the hopes of a guaranteed job when we graduated. It's now a hilarious thought in hindsight.

Unless you graduated with a degree in one of the STEM disciplines, you probably didn’t land your ideal job right out of college. And that’s likely the reason why you’re stuck in a job you feel has no real purpose, and why your student debt is still hanging over your head.

If you feel Laura's claim and the article above aren't scientific enough, Dan Ariely, Emir Kamenica and Drazen Prelec have a scientific paper in which they try to decipher man's search for meaning using Lego pieces. In this social experiment they paid college students money for making Lego Bionicles. There were two conditions in the experiment: the meaningful condition, where the Bionicles built by the students would be kept on a table while they made new ones, and the meaningless condition (i.e. the Sisyphus condition), where the experimenter would dismantle the Bionicles in front of the student, making it very clear to the students that their work served no purpose, before giving them new Lego pieces. Here is how the paper describes the two conditions:

In the Meaningful condition, after the subject would build each Bionicle, he would place it on the desk in front of him, and the experimenter would give him a new box with new Bionicle pieces. Hence, as the session progressed, the completed Bionicles would accumulate on the desk.

In the Sisyphus condition, there were only two boxes. After the subject completed the first Bionicle and began working on the second, the experimenter would disassemble the first Bionicle into pieces and place the pieces back into the box. Hence, the Bionicles could not accumulate; after the second Bionicle, the subject was always rebuilding previously assembled pieces that had been taken apart by the experimenter. This was the only difference between the two conditions. Furthermore, all the Bionicles were identical, so the Meaningful condition did not provide more variety than the Sisyphus one.

The results of the test are astounding. Here is what they found:

Despite the fact that the physical task requirements and the wage schedule were identical in the two conditions, the subjects in the Meaningful condition built significantly more Bionicles than those in the Sisyphus condition. In the Meaningful condition, subjects built an average of 10.6 Bionicles and received an average of $14.40, while those in the Sisyphus condition built an average of 7.2 Bionicles and earned an average of $11.52.

The Wilcoxon rank-order test reveals that the reservation wage was significantly greater in the Sisyphus than in the Meaningful condition (exact one-sided p-value = 0.005). The median subject in the Sisyphus condition stopped working at $1.40, while the median subject in the Meaningful condition stopped at $1.01. Hence, the difference is economically as well as statistically significant, as the Sisyphus manipulation increased the median reservation wage by about 40 percent.

Put simply, remove meaning from a person's work and they will work less, even at a roughly 40% higher payout. Not to mention that the person will end up being much less productive, feel tired much quicker, burn out and give up much faster.

So the next time you get the urge to establish a stronger work-life balance and feel that you are overworked, it might be a good idea to sit down and reflect on whether you are really doing 80-hour work weeks or just overestimating how overworked you are. Are you in a job that excites you, or do you need to find additional work that you love doing on the side? Are you relying on your job to provide you meaning when your organization expects you to toil like Sisyphus?

There was a time in my life when I would have said that you can find meaning outside your work life, for example by working on open source projects, participating in community efforts or contributing to online discussions. But as I grow, I am starting to realize that 8 hours a day (and 40 hours a week) of prime productive time is considerable enough for you to start looking for ways to change your organization (or change your organization), particularly if your work isn't providing you sufficient excitement, challenge, flow and meaning.

Of course, until you can change (or change) your organization, meaningful work outside your paid job can provide the much-needed boost to keep your creative spirits alive.

Either way, if you feel you are overworked, constantly tired and don't like the idea of waking up in the morning to go to work, it's time to stop blaming yourself, take a long hard look at the work you are doing and ask yourself one basic question: are you enjoying yourself? And if the answer is no, what are you actively doing to change that?




Posted On: Tuesday, 27 December 2016 by Rajiv Popat

This happened to all of us way back in our school days. The teachers would separate the hardworking, high-scoring, intelligent students from the ones who were ruckus creators, and then they would treat the two groups differently.

The kid you knew as the ruckus creator back in nursery usually remained a ruckus creator all the way through high school, while the hardworking scholar would top almost every class growing up. Information about who was a star student and who was a ruckus creator flowed from teacher to teacher as you moved from one class to another. If you were a ruckus creator who genuinely wanted to change, you were screwed; you virtually couldn't!

"Is he good?" That's a question often discussed between managers when onboarding a person onto a project. The idea and the central premise being: if the guy has worked with a different manager in the past and you happen to know the person he worked under, why not take a quick input from that manager before onboarding the person onto your project?

Adam M. Grant shatters this myth and methodology of 'searching for star performers' in his book 'Give and Take', where he takes the relationship between 'performance' and 'reputation through word of mouth' and turns the causation between the two upside down. He explains:

Harvard psychologist Robert Rosenthal teamed up with Lenore Jacobson, the principal of an elementary school in San Francisco. In eighteen different classrooms, students from kindergarten through fifth grade took a Harvard cognitive ability test.

The test objectively measured students' verbal and reasoning skills, which are known to be critical to learning and problem solving. Rosenthal and Jacobson shared the test results with the teachers: approximately 20 percent of the students had shown the potential for intellectual blooming, or spurting. Although they might not look different today, their test results suggested that these bloomers would show "unusual intellectual gains" over the course of the school year.

The Harvard test was discerning: when the students took the cognitive ability test a year later, the bloomers improved more than the rest of the students. The bloomers gained an average of twelve IQ points, compared with average gains of only eight points for their classmates. The bloomers outgained their peers by roughly fifteen IQ points in first grade and ten IQ points in second grade.

Two years later, the bloomers were still outgaining their classmates. The intelligence test was successful in identifying high-potential students: the bloomers got smarter—and at a faster rate—than their classmates.

Based on these results, intelligence seems like a strong contender as the key differentiating factor for the high-potential students.

But the Harvard cognitive ability test was not a way to identify students who were going to be bloomers in the coming years! It was nothing more than a trick experiment designed by the psychologist to prove his hypothesis. Adam explains:

The students labeled as bloomers didn’t actually score higher on the Harvard intelligence test. Rosenthal chose them at random.

The study was designed to find out what happened to students when teachers believed they had high potential. Rosenthal randomly selected 20 percent of the students in each classroom to be labeled as bloomers, and the other 80 percent were a control group. The bloomers weren’t any smarter than their peers. The difference “was in the mind of the teacher.”

Yet the bloomers became smarter than their peers, in both verbal and reasoning ability. Some students who were randomly labeled as bloomers achieved more than 50 percent intelligence gains in a single year. The ability advantage to the bloomers held up when the students had their intelligence tested at the end of the year by separate examiners who weren’t aware that the experiment had occurred, let alone which students were identified as bloomers. And the students labeled as bloomers continued to show gains after two years, even when they were being taught by entirely different teachers who didn’t know which students had been labeled as bloomers. Why?

Teachers’ beliefs created self-fulfilling prophecies. When teachers believed their students were bloomers, they set high expectations for their success. As a result, the teachers engaged in more supportive behaviors that boosted the students’ confidence and enhanced their learning and development. Teachers communicated more warmly to the bloomers, gave them more challenging assignments, called on them more often, and provided them with more feedback.

In the book, Adam describes how the same experiment was repeated again and again, in fields like sports, the workplace and even the armed forces, and how the same results held true each time.

As a person who manages teams of capable developers, I have always intuitively believed in the idea of self-fulfilling prophecies, but seeing a quantification of how strong our biases and influences are, and how they end up affecting the people who work with us, is a little… scary, to say the least.

So, the next time you ask another manager about the efficiency and capability of an individual you are onboarding, without even evaluating the person on his or her own merit, be aware that you might be unknowingly setting the stage to create and then support a self-fulfilling prophecy.

What's even scarier is the idea that a lot of new, budding managers find it hard to delegate work to their team members because they believe the team members will not be able to do those tasks as well as they do the tasks themselves. Put simply, these managers start out with the assumption that their team is not as effective or productive as they themselves are. When you put that in perspective with the idea of self-fulfilling prophecies and the real power these prophecies have, where does this leave you as a manager? Where does this leave your team? Just a little something to think about.




Posted On: Tuesday, 22 November 2016 by Rajiv Popat

Productivity as a topic has been very near and dear to my heart. Unlike most people, my obsession with productivity tools, tips and techniques doesn't revolve around the fact that productivity allows me to squeeze a couple of hours' worth of extra work into my regular workday. For me, productivity is a way of life, a way of doing more of what you want to do (or what you were meant to do) and less of what others want you to do.

Chris Bailey caught my attention as an author because he had the courage and conviction to take an entire year out of his prime years to study productivity. The Productivity Project chronicles what he learned in that one year. The book is filled with experiments (some scientific, but most others self-applied) where the author makes himself a guinea pig and tries out some sane and some insane productivity tips and tricks.

The book begins with practical advice and claims to be able to take you from a busy, cluttered day to a well-managed day.

The book was a validation that I'm not the only one who is crazy enough to try and measure every single waking hour of their life! Other, sane authors have done it too.

Where the book started grabbing my attention was the moment Chris laid out his definition of being productive. Chris describes productivity using a simple idea of living with deliberateness and intention. He explains:

I think the best way to measure productivity is to ask yourself a very simple question at the end of every day: Did I get done what I intended to? When you accomplish what you intend to, and you’re realistic and deliberate about the productivity goals you set, in my opinion you are productive.

If at the beginning of the day you intend to write a thousand great words, and you do, you were productive.

If you intend to finish a report at work, ace a job interview, and spend quality time with your family, and you do, again, you are perfectly productive.

If you intend to relax for a day, and you have the most relaxing day you’ve had all year, you were perfectly productive.

It's an idea the likes of David Allen have been trying to propagate for years. The book is also full of real-world, practical advice, ranging from simple things like emptying your brain, using the Pomodoro technique, the importance of exercise, the importance of food and the perils of attention hijackers like mindless surfing; but the real power of the book lies in how simply Chris describes some of the complex things that end up affecting your productivity. Take, for instance, this passage on how sugar affects your productivity:

On a neurological level, you have mental energy when you have glucose in your brain. When you feel tired or fatigued, more often than not it’s either because your brain has too much or not enough glucose to convert into mental energy. Research has shown that the optimal amount of glucose to have in your bloodstream is around 25 grams—about the amount of glucose in a banana. This exact number isn’t all that important, but what is important is that your glucose levels can be either too high or too low.

Since unprocessed foods (in general) take longer to digest, your body converts them into glucose at a slower rate, which provides you with a steady drip of glucose (and energy) over the day—instead of a big hit of energy followed by a crash. In a way, processed foods are predigested for you by machines. This is why your body converts them into glucose so fast, and why a donut doesn’t provide you with nearly as much lasting energy as an apple.

We all know processed foods harm our productivity and health, but simple explanations like these go a long way in understanding which foods to pick and provide the much-needed nudge to make the right decisions. The book is also filled with surprising and mind-blowing passages which are fascinating (and somewhat philosophical) to read. Take, for instance, this passage on the history of time itself:

If you were around before the industrial revolution ended in the early 1800s, you wouldn’t have measured time down to the minute, not only because you didn’t have the technology to do so, but also because you didn’t need to. Before the industrial revolution, measuring time wasn’t as important, and most of us worked on the farm, where we had way fewer deadlines, meetings, and events to sequence than we have today. In fact, until the first mass-market, machine-made watches were produced in the 1850s, timepieces were unobtainable by pretty much anyone except for the super rich, and most of us charted the day’s progression by looking at the sun. Because we didn’t measure time with a clock, we would speak about events relative to other events. In the Malay language, there is even the phrase pisan zapra, which roughly translates to "about the time it takes to eat a banana."

The book then goes on to describe how, in merely about 150 years, we went from not caring about time to having a huge industry and pretty much most of our lives run on set timings. The book covers productivity from more angles than any other productivity book I've read thus far. Even for an avid reader of books on topics like time management, neuroscience and psychology, a lot of the concepts the book explains (e.g. removing triggers to change your habits, exercising your focus muscle, etc.) are not new at all, but they are explained with a unique personal insight that I enjoyed thoroughly.

After David Allen's GTD, if you have room for one more book on productivity, this is the book you should definitely pick up. I would give it a 5 out of 5!




Posted On: Tuesday, 01 November 2016 by Rajiv Popat

If you're interested in and working with JavaScript frameworks, you've probably heard of Aurelia by now. It's a compelling competitor to frameworks like Angular and React. While getting started with Aurelia itself seems pretty straightforward, getting Aurelia working with TypeScript, and making it all work in Visual Studio 2015, has its own share of hiccups.

The Aurelia team provides starter projects that they call skeletons, which you can download to get up and running really quickly. However, when I tried using them, the skeletons seemed to have issues which were both time-consuming and frustrating to resolve. Even the skeleton that was supposed to run with .NET MVC (and had a ".sln" solution file) would not compile without errors. And these skeletons come with a lot more than what you would like to have when you are just trying to get an initial hold of Aurelia and TypeScript. This left me with no other option but to start fresh and create my own basic skeleton where I can try out different Aurelia features.

If you've tried to get started with Aurelia + TypeScript and you are a .NET programmer who lives inside Visual Studio, the goal of this post is to get you up and running with Aurelia and TypeScript inside Visual Studio 2015.

To begin with, you're going to need TypeScript working inside your Visual Studio 2015. The simplest way I've found to do this is to just uninstall older versions of Visual Studio 2015 and install Visual Studio 2015 Update 3 from scratch. You could use that link, but if you have an MSDN subscription you are better off downloading an offline ISO from there and using that, which is what I did. Initially, I tried an in-place update of Visual Studio 2015, and the installer kept crashing for some reason (this of course could be because I was on a weak Wi-Fi connection). The MSDN ISO (a 7 GB download) worked smoothly after an uninstall of my existing Visual Studio, followed by a fresh install.

With Visual Studio 2015 Update 3 (with Core 1) loaded, you're also going to need TypeScript support inside Visual Studio so that your TypeScript files are compiled and converted to JS files each time you save them. To do that, you can grab the Visual Studio TypeScript plugin from here and install that. You will also need the Node Package Manager (npm) working on your machine, and the simplest way to do that is to download and install Node.js.

With that done, we're ready to start our first hello world project with TypeScript + Aurelia.

As I said before, the easiest way to do this would have been to download and use the skeletons, but given that the skeletons provided by the Aurelia team didn't work for me, I was left with no option but to build my Aurelia app by hand and get started. Honestly, building your first app by hand actually works out better, because it gives you a fresh insight into many underlying concepts that you will typically not have to pick up if you use a ready-made skeleton instead.

Since we're going to be working with Visual Studio 2015 as our IDE of choice, let's go and create a blank ASP.NET web development project inside Visual Studio in a folder of your choice. Open the solution file and keep the solution open in Visual Studio as you proceed with the steps below.

Once the project is created, start a command prompt and go to the specific location where you created the project. Note: go inside the project folder (not the folder which contains the .sln file, but the one that has your web.config file):

Once there, start by typing in the following commands:

npm install jspm
jspm init

JSPM is the JavaScript Package Manager, which lets you fetch and use the various JavaScript modules you will need to get started with Aurelia. In the above diagram we switch to the project folder (shown in the screenshot + code snippet) and do an npm install of jspm, which fetches JSPM onto your machine. Once there, we initialize jspm in our project folder (jspm init), where it will create a new project, asking you a few basic questions:

We select the default answer by just hitting enter, except for picking the transpiler, where we will use TypeScript instead of the default transpiler JSPM uses (Babel).

Once that is done, we continue with the rest of the defaults and finish our "jspm init". We then install the required underlying frameworks in our project by doing:

jspm install aurelia-framework aurelia-bootstrapper bootstrap

This should pull all the files pertaining to the Aurelia framework, the Aurelia bootstrapper and the Bootstrap framework (which are the very basic things we need to start a simple web application with Aurelia and TypeScript). If all goes well, your folder structure should look like this inside Visual Studio, with "Show All Files" selected in Solution Explorer:

We now need to start writing code for our project. The first thing we do is right-click the config.js file in Solution Explorer and say "Include in Project".

Once done, we open the file and add one line that points the loader at the folder our code will live in.
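Your generated config.js will have more in it than this, and the exact contents vary with the jspm version, but the change essentially boils down to one extra entry under paths:

System.config({
  defaultJSExtensions: true,
  transpiler: "typescript",
  paths: {
    "*": "src/*",                           // the line we add: resolve our modules from the src folder
    "github:*": "jspm_packages/github/*",
    "npm:*": "jspm_packages/npm/*"
  }
});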

This tells the module loader to look for our code in the “src” folder. Of course, we don’t have that folder in our solution yet, so we create it by right-clicking the project in Solution Explorer and adding a new folder called “src”. Now we have a place where we will write our Aurelia code. But the web server that hosts the code will need a startup file to begin running the application, which usually is index.html. So let’s code the following index.html by hand:



 
<!doctype html>
<html>
  <head>
    <title>Aurelia</title>
  </head>
  <body aurelia-app>
    <script src="jspm_packages/system.js"></script>
    <script src="config.js"></script>
    <script>
      System.import('aurelia-bootstrapper');
    </script>
  </body>
</html>

This is the simple, standard index.html file most Aurelia applications will typically need. We are adding two JS files that Aurelia needs: the first is system.js and the second is where our configuration is stored (config.js). We then use System.import (from system.js) to import the aurelia-bootstrapper.

Also note the “aurelia-app” attribute on the body. A few important pieces are getting connected in the above code, and Aurelia is using convention to connect them. The index.html tells Aurelia to use the config.js file. And as we’ve seen before, the line we added in config.js tells Aurelia that our custom code lives in the “src” folder. The “aurelia-app” attribute tells Aurelia to look for “app.js” as the entry point by default. Note: we haven’t specified app.js anywhere – the aurelia-app attribute itself (by convention) tells Aurelia to use app.js by default. You can of course override the convention, but that’s for another post. Right now, let’s just create an app.js in the “src” folder.

We could drop an app.js file inside “src”, but remember we are planning on using TypeScript throughout the project, so instead of an app.js we will use “app.ts”. We will work on the TypeScript file (app.ts) and let Visual Studio generate the .js file each time we save the “.ts” file. So let’s right-click the “src” folder, add a TypeScript file and call it “app.ts”. Because TypeScript provides added intelligence and compile-time validations, it needs what we call typing files, which allow Visual Studio to validate your TS code. Which is why Visual Studio will prompt you about getting typing files for the project.

Going ahead, we will grab our typing files manually, so say no to that prompt for now and proceed. We’re going to talk more about typings later in this post.

In your blank app.ts add the following lines:

export class App
{
    Message: string;
    constructor()
    {
        this.Message = 'Hello World';
    }
}

Note that in the above code we have a simple TypeScript class (this will translate to a JS function) and a string variable called Message. In the constructor we give the field a default value. This app.ts will get translated to app.js when we save it and will act as a view-model. Now that we have a view-model, let’s go ahead and make a view. Aurelia views are simple HTML pages wrapped in a template tag. So inside the “src” folder let’s add a new app.html.
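Something as minimal as a template with the Message interpolated into it will do; I’m using an h1 here purely so the output is easy to spot, any element works:

<template>
  <!-- any element works here; an h1 just keeps the output obvious -->
  <h1>${Message}</h1>
</template>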


The Message that we set in the view-model should now flow to the view, and when you compile and run the project at its root you should see “Hello World” rendered in the browser.

Congratulations! You now have your first Aurelia project with TypeScript running. Now let’s do something more meaningful and try to create a screen that adds customers to a list of customers. To do that, let’s start by making a Customer class by adding a “Customer.ts”. For now, let’s modify the blank Customer class so that it has a CustomerName attribute and looks like this:

export class Customer
{
    CustomerName: string;
    public constructor()
    {
    }
}

We now need to use the Customer class inside app.ts and create a function that allows us to add a customer to the list of customers. To do that we modify our app.ts:

import { Customer } from './Customer';
export class App
{
    CurrentCustomer: Customer;
    Customers = new Array<Customer>();

    constructor()
    {
        // Start with a blank customer for the form to bind to.
        this.CurrentCustomer = new Customer();
    }

    addCustomer()
    {
        if (this.CurrentCustomer)
        {
            this.Customers.push(this.CurrentCustomer);
            // A fresh Customer object blanks out the textbox for the next entry.
            this.CurrentCustomer = new Customer();
        }
    }
}

In the above code we use import to bring the Customer class into our App class so that we can use it in our code. Then we create an array of customers (which will hold the list) and a CurrentCustomer (which the user will add using the UI); the constructor starts us off with a blank CurrentCustomer for the form to bind to. The “addCustomer” method adds the current customer to the list. To make sense of all of this, let’s create a UI front end which has a textbox and a button called “Add Customer”, which adds the customer whose name you type in the textbox to a list of customers represented by a “ul”.
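The final view (app.html) ends up looking something like this; I’m keeping the markup bare-bones here, it’s the bindings that matter:

<template>
  <form submit.trigger="addCustomer()">
    <input type="text" value.bind="CurrentCustomer.CustomerName" />
    <button type="submit">Add Customer</button>
  </form>
  <ul>
    <li repeat.for="customer of Customers">${customer.CustomerName}</li>
  </ul>
</template>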


Notice that in the code above I have a form whose submit triggers the addCustomer method which we wrote in our view-model. There is a textbox which we “bind” to the CustomerName of the current customer, which again is defined in our view-model. We have a simple submit button and a “ul” whose “li” elements repeat for every customer in the “Customers” array defined in our view-model. The UI is just a textbox, an “Add Customer” button and an (initially empty) list.

As we type the name of a customer and click the Add Customer button, the customer gets added to the list, and we can keep doing this for multiple customers.

The binding of the textbox to CurrentCustomer.CustomerName ensures that the value passes from the view to the view-model. Each time addCustomer is called we create a new Customer object, and hence the textbox blanks out after the existing customer is added to the “Customers” array, which is bound to the “ul” using a “repeat.for” loop.

So far so good. Everything we’ve done thus far compiles, builds and runs.

However, as you go deeper into Aurelia, you will realize that you need to use more complex concepts like dependency injection (where you inject services into your view-models). The starter project we have created works, but it isn’t fully ready to handle such imports because the typing files are missing. Remember we said we’d discuss typings later in this post? This is the point where we need to address typings to move ahead.

To use virtually any advanced feature in Aurelia you will have to import the Aurelia framework in your code. For example, if you want to use Aurelia’s dependency injection, the code to do so would look like this:

import { inject } from 'aurelia-framework';

Put that code in your app.ts and you’ll immediately see Visual Studio start complaining that it cannot find the module ‘aurelia-framework’.

This is because Visual Studio knows nothing about the Aurelia framework even though we did a “jspm install aurelia-framework” right when we started. We did the install from the command prompt using jspm, but Visual Studio (and TypeScript) still require typing files for the Aurelia framework before they let you import specific components of the framework inside your TS files. The simplest way to grab typing files is to add a “typings.json” file in your project root with the following lines:

{
  "name": "AureliaHelloWorld",
  "dependencies": {
    "aurelia-binding": "github:aurelia/binding",
    "aurelia-bootstrapper": "github:aurelia/bootstrapper",
    "aurelia-dependency-injection": "github:aurelia/dependency-injection",
    "aurelia-event-aggregator": "github:aurelia/event-aggregator",
    "aurelia-fetch-client": "github:aurelia/fetch-client",
    "aurelia-framework": "github:aurelia/framework",
    "aurelia-history": "github:aurelia/history",
    "aurelia-history-browser": "github:aurelia/history-browser",
    "aurelia-loader": "github:aurelia/loader",
    "aurelia-logging": "github:aurelia/logging",
    "aurelia-logging-console": "github:aurelia/logging-console",
    "aurelia-metadata": "github:aurelia/metadata",
    "aurelia-pal": "github:aurelia/pal",
    "aurelia-pal-browser": "github:aurelia/pal-browser",
    "aurelia-path": "github:aurelia/path",
    "aurelia-polyfills": "github:aurelia/polyfills",
    "aurelia-route-recognizer": "github:aurelia/route-recognizer",
    "aurelia-router": "github:aurelia/router",
    "aurelia-task-queue": "github:aurelia/task-queue",
    "aurelia-templating": "github:aurelia/templating",
    "aurelia-templating-binding": "github:aurelia/templating-binding",
    "aurelia-templating-resources": "github:aurelia/templating-resources",
    "aurelia-templating-router": "github:aurelia/templating-router"
  },
  "globalDevDependencies": {
    "angular-protractor": "registry:dt/angular-protractor#1.5.0+20160425143459",
    "aurelia-protractor": "github:aurelia/typings/dist/aurelia-protractor.d.ts",
    "jasmine": "registry:dt/jasmine#2.2.0+20160505161446",
    "selenium-webdriver": "registry:dt/selenium-webdriver#2.44.0+20160317120654"
  },
  "globalDependencies": {
    "url":
"github:aurelia/fetch-client/doc/url.d.ts#bbe0777ef710d889a05759a65fa2c9c3865fc618",
    "whatwg-fetch": "registry:dt/whatwg-fetch#0.0.0+20160524142046"
  }
}

This lists practically all the Aurelia typing files we are going to need now and in the future. Once you have created and saved this file, go to the command prompt, navigate to the folder that has the typings.json file (i.e. the same folder that holds your web.config) and type:

npm install typings -g

This will install the typings module globally. Now, to grab the relevant typing files based on your typings.json, type:

typings install

Now we’ve fetched the typing files, but Visual Studio is still blissfully unaware of the fact that we’ve pulled them in. You should also see a “typings” folder in Solution Explorer. To make Visual Studio aware of the typings, we need to add a typing definition file inside our source folder – the one our transpiler is watching. We can call this file anything as long as it has a “.d.ts” extension, but for now we’ll call it “main.d.ts” and place it inside the src folder. If you look inside the typings folder you’ll notice that it already has a typing definition file called “index.d.ts” which references all the necessary Aurelia files; so if our “main.d.ts” just references that file we should be done. Let’s go to our newly created blank “main.d.ts” (inside the src folder) and add this line:

/// <reference path="../typings/index.d.ts" />

With this done we now have the typings referenced properly, and Visual Studio should stop throwing the “cannot find module ‘aurelia-framework’” error. However, when you now fire a build you should see dozens of these two errors:

Build:Cannot find name 'Promise'.
Build:Cannot find name 'Map'.

This is because the Aurelia typings internally use promises and collections (Promise and Map). To fix these errors we can use NuGet inside Visual Studio and install the TypeScript definitions for ES6 promises and collections. The commands to do that (inside the Visual Studio NuGet Package Manager Console) are:

Install-Package es6-promise.TypeScript.DefinitelyTyped

Install-Package es6-collections.TypeScript.DefinitelyTyped

Once the typings for promises and collections are installed, your build should compile successfully. However, if you start using advanced features like dependency injection you will encounter some more build errors. For example, let’s modify our “app.ts” to use dependency injection:

import { Customer } from './Customer';
import { inject } from 'aurelia-framework';

@inject(Customer)
export class App
{
    CurrentCustomer: Customer;
    Customers = new Array<Customer>();

    constructor(injectedcustomer: Customer)
    {
        // Aurelia's DI constructs a Customer and passes it in here;
        // we don't use it yet, and simply start with a blank customer as before.
        this.CurrentCustomer = new Customer();
    }

    addCustomer()
    {
        if (this.CurrentCustomer)
        {
            this.Customers.push(this.CurrentCustomer);
            this.CurrentCustomer = new Customer();
        }
    }
}

Notice the import of inject, the @inject(Customer) decorator and the injectedcustomer constructor parameter, which use Aurelia’s out-of-the-box dependency injection. In other words, Aurelia automatically creates an object of the Customer class and passes it into the constructor. However, the moment you actually do this and hit a build, you should see a compilation error:

Build:Experimental support for decorators is a feature that is subject
to change in a future release.

Set the 'experimentalDecorators' option to remove this warning.

To overcome this error you will need to add a new tsconfig.json to your project root with the following lines:

{
  "compilerOptions": {
    "noImplicitAny": false,
    "noEmitOnError": true,
    "removeComments": false,
    "sourceMap": true,
    "target": "es5",
    "experimentalDecorators": true
  },
  "exclude": [
    "node_modules","jspm_packages"

  ]
}

The experimentalDecorators value of true ensures decorators like inject are allowed. Excluding node_modules and jspm_packages ensures that the TypeScript compiler skips those folders when it fires a build. Fire a build now and it should succeed. Run the code and it should work just as before, because we aren’t doing anything in particular with dependency injection here. In fact it’s actually a bad example of dependency injection, but I included it in this post because the post covers the setup of a starter project that lets you try out and learn everything Aurelia has to offer while using it with TypeScript inside Visual Studio 2015, so adding the right “tsconfig.json” and getting the typings upfront is a good idea (even if you are not using dependency injection or other advanced Aurelia concepts).
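Just to show where this is headed, the more typical use of @inject is to pull a service into your view-model rather than a plain model class. A rough sketch, using a hypothetical CustomerService that is not part of the sample project above (in a real project it would live in its own file under src), would look like this:

import { inject } from 'aurelia-framework';
import { Customer } from './Customer';

// Hypothetical service, purely to illustrate the pattern.
export class CustomerService
{
    getCustomers(): Customer[]
    {
        return [];
    }
}

@inject(CustomerService)
export class App
{
    Customers: Customer[];

    constructor(customerService: CustomerService)
    {
        // Aurelia's DI container constructs CustomerService and hands it to us here.
        this.Customers = customerService.getCustomers();
    }
}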

I honestly believe that while the Aurelia team is doing an amazing job with the documentation and videos for Aurelia itself, mixing Aurelia with TypeScript and getting it all to run on Visual Studio 2015 can turn out to be a bit daunting for someone who is starting out on their Aurelia + TypeScript journey, because there is no single place to get started. It would be really nice not to have to go through so many steps just to set up a basic development project where you can try out and learn the features Aurelia (with TypeScript) has to offer while working inside Visual Studio.

I know you can create projects using the Aurelia CLI tools, but even those had the same typings-related issues that I highlighted in this post, and getting those to work was an equally daunting task. Now that I have been working with Aurelia for a few days, I can take a skeleton and make that work too, but as far as I am concerned, the learning curve for Aurelia itself has been much lower than the learning curve required to get Aurelia with TypeScript working inside Visual Studio. All I can do is hope that the Aurelia team builds some more documentation around getting started with Aurelia + TypeScript. In the meantime, this post should get you on your way.




Posted On: Sunday, 23 October 2016 by Rajiv Popat

In 2016 I took up the 52 book challenge as a Marathon for my mind. My idea was to read a book a week and write about my progress on this blog. 2016 has been one busy year and I may not have been able to live up to my original expectations but I’ve been lucky enough to have taken time out during my commutes and weekends to sit and enjoy the company of a book every now and then.

I’ve written reviews of seven of the books that I’ve read and really liked this year, but obviously I’ve been reading more. Here is a list of the rest of the books I’ve finished cover to cover this year (the first seven are covered in older posts, which is why this list starts from 8):

 

  8. The Practicing Mind: An averagely good book that talks about the power and benefits of focused practice. What I liked about the book was its basic message, but the book lacks storytelling and also lacks sufficient science, leaving it in the murky territory of a typical self-help book. Not that there is anything wrong with typical self-help books, but they are not something I enjoy reading. I’d give it a 3 / 5.
  9. You Are Your Own Gym: Bodyweight training was how I started my fitness journey. As a nerd, any book that is focused on bodyweight training attracts me even today. There is something liberating about not having to depend on gyms, weights and machines to remain healthy. Being able to work out in a hotel room when you are travelling, and not using travel as a reason to skip your workout, is very empowering too. Written by an author who has trained special forces to be healthier and more functional in the field, the book has not just workouts but ideas and concepts every person who is into a healthy lifestyle should read and understand. More than just a book, it’s a program that also has an accompanying app that you can download for free and then buy if you like. I’d give this program (book + app) a 4 / 5, but don’t expect it to give you any magical gains unless you have the consistency to stick to the program.
  10. The Tao of Pooh: I’ve always wanted to read philosophy outside the Indian and Western traditions, and if there is one book that does an amazing job of introducing you to Taoism it is this one. It’s funny, it’s witty and it’s deep. One of the few books that prove your writing doesn’t have to be complex to be deeply philosophical. And of course, who cannot love Pooh? :) – A 4/5!
  11. Go for No – This is one of the few books this year that did not impress me. The central premise is fine – as a salesperson, if you aim for yeses in your sales calls you are at a disadvantage. Every no you hear puts you down and demotivates you. But if you ‘Go for No’ and start your day by aiming for X number of no’s every day, then you are motivated to keep calling even when you hear a no, because you are moving ahead towards your goal of X no’s. Every yes is a pleasant surprise and pushes you even harder. There are pages and pages of weird time-travel-based fiction where the protagonist falls on a golf course, hurts his head, time travels and meets his own future self, who teaches him the lesson of going for no’s. The book is a little too strange for my taste, and what put me off is that the only teacher the protagonist finds, even in his own fictional world, is his very own hypothetical future self. The central premise is good, but what could have been conveyed in one paragraph or a small readable blog post was stretched into an agonizingly long book with meaningless fluff and crazy fiction which was completely not required (and maybe even detrimental) for spreading the actual message. A 2/5.
  12. Naked Statistics: I haven’t read a book on math since I finished college, and when I stumbled onto this one I saw it as a good opportunity to brush up on a subject that I never fully grasped in school. The book just blew me away with the way it describes some of the concepts of statistics that I studied in school but never really understood. From the difference between mean, median and mode to topics like correlation, the book covers each topic from a real-life, practical standpoint. It then touches the math side of each topic, where the author takes very simple, real examples to explain the complexities. The goal is to develop an intuitive understanding of some of the most used mathematical and statistical concepts, and the book does indeed do an amazing job at it. A 4/5.
  13. Deep Work: Cal Newport as an author gained my attention with his book ‘So Good They Can’t Ignore You’, where he argued that passion is overrated. In this book he talks about the power and importance of focused, deep work in any creative person’s life. The book cites real examples of folks who go really far to cut themselves off from distractions. The book is inspirational and the writing style is pretty good, but there are not a lot of truly unique ideas in this book that stick with you for life. A fascinating read, though, to understand how fragmented and distracted our lives today have really become. I’d give it a 4 / 5.
  14. Nudge: Written in the same tone as Switch (which I reviewed earlier), the book covers the topic of choice architecture with much more theory and many more examples. Though a lot of material in this book is very similar to Switch, it covers a vast variety of topics, going from designing toilets that encourage cleanliness to libertarian paternalism in the world of nutrition. The tone of the book is very research-oriented and factual, with very little or virtually no self-help advice, which is what I love about it. I would give this a 4 / 5.
  15. Focus: The Hidden Driver of Excellence – From how a house detective scans a huge store full of people, focuses on small telltale signs and picks out shoplifters, to the science of willpower and the famous 10,000-hour rule popularized by Malcolm Gladwell, the book covers a lot of ground but provides very few real-life tools or frameworks to increase your attention. A good read, especially if you enjoy books on psychology, but again, if you’ve read a few books on neuroscience and psychology you will not find a lot of new ideas in this book. I rate this a 3 / 5.
  16. Eat and Run: After reading Born to Run, when I came across the personal biography of Scott Jurek (one of the primary real-life characters of Born to Run), I needed no special nudge to pick it up. Scott is indeed as good a writer / storyteller as he is a runner. An intricately woven collection of experiences from the races he has run, the things he has been through and the life he has lived makes the book a fascinating read. Scott’s own character, his vegan diet and his outlook on life are the icing on the cake. Again, a must-read for anyone who wants to experience real-life storytelling that is much more fascinating than fiction. A 5/5.
  17. One Up on Wall Street: Maybe it’s my experience with banking software, or the fact that I come from a family that has done business for five generations, but I’ve always invested a part of my salary in long-term, stable investments and have been fairly lucky with them. When I saw this book by Peter Lynch, a respected fund manager, I seized the opportunity to learn from a master investor. What’s amazing about this book is that it covers the basic technicalities of investing (things like the PE ratio, value investing, etc.) and then jumps into the art of investing. Do you invest in companies which have rich, amazing offices, or do you buy stocks of companies where a bunch of hard-working folks are working in a cramped office in an inexpensive corner of the city? Do you keep your eyes open when you see a real-life consumer product in your local supermarket and place your bets by investing in the company before the financial experts of the world start giving positive reviews to its stock? In a predatory world of bulls, bears, computerized flash trading and deception, Peter provides hope to the individual investor and shows them how they can use their local insights to get one up on Wall Street. A 4/5.
  18. Sleep Smarter: As a nerd who lives an extremely irregular life as far as sleeping habits are concerned, when I bumped into Sleep Smarter I knew this was my chance to bring some serious change into my life. The book is a collection of articles which describe how big an issue sleep deprivation is, and then moves into some real, pragmatic tips that you can use today to improve your sleep. For example: avoid screen time at least a few hours before sleep and, if you must, install a special app that dims the light on your phone. How temperature affects your sleep, how the clothes you wear have an impact on your sleep and, above all, how to make real, lasting changes to your sleeping habits. This is not a book that everyone might appreciate, but given my love-hate relationship with sleep and the fact that I am a night owl who is also fascinated by the idea of waking up before the sun rises, the book was an extremely good read for me. I would give this a 4/5.
  19. Total Money Makeover: Dave Ramsey is one of the few guys in the world of personal finance whose advice I’ve incorporated into my own life, and I have benefitted HUGELY when it comes to my finances. I read this book early this year (in fact it was one of the first few books I read this year) and it has had a deeper influence on my financial life than any other book. I’ve adapted some of Dave’s ideas described in this book, have increased my savings 3X and am really close to becoming completely debt free. The book is an awesome read for everyone who has any debt, doesn’t keep a written monthly budget or doesn’t record every single transaction of his / her life. From busting popular myths about home loans, student loans, credit cards and car loans, Dave takes a slightly confrontational tone in this book, but the tone serves its purpose of shaking you out of your dream world and bringing you face to face with the dark reality of debt and how it cripples your financial life. I’d give this book a 5 / 5 and the respect it deserves. If you have respect for money you should read this book. If you are a nerd who struggles with managing money, you have to read this book.
  20. Originals: This book is by far one of the most ‘original’ books I’ve read on creativity. The book just demolishes every piece of conventional wisdom on creativity. Entrepreneurs are bold; WRONG. Entrepreneurs take chances and risks; WRONG. Most entrepreneurs are sure of their ideas and their vision: WRONG. Most startups are formed by people who are in their early thirties: WRONG. You need to quit your job if you want to form a startup and have conviction and belief in your idea: WRONG. Every single idea or non-scientific, random piece of self-help wisdom on creativity and entrepreneurship that you hear about so much these days will be shredded into tiny threads and blown away after you read this book. The book personally gives me the validation that I’ve been seeking for a very long time but haven’t found anywhere else in the world. After reading this book all I can do is hope that more and more authors do responsible research like Adam Grant did for this book, rather than parroting stupid self-help catchwords like ‘passion’, ‘take the plunge’ and the ridiculous ‘you can do it!’. No wonder this book has an almost 5-star rating on Amazon and is a breath of fresh air among all the books on business and creativity I’ve read thus far. I would rate it a big fat 5 / 5, and if you were going to read one book on creativity, or if you were thinking of doing anything creative with your life, I would advise you to grab a copy of this book and read it cover to cover.
  21. Mini Habits: Having difficulty working out? Why not start with just one push-up a day? Sounds ridiculous? Try it, and before you know it you’ll be doing 100 a day in a couple of months. And if you don’t, well, you can still do one and maintain your habit. The idea is ridiculous. In fact it’s so ridiculous it actually works! The premise behind this book? Our brain often makes an unpleasant activity which is good for us (like working out or eating vegetables) seem so difficult that we don’t even start. But what if we tricked our brain by saying, I’m just going to do one push-up or eat a tiny half-inch slice of broccoli every day? There is very little resistance from your brain because the habit is so tiny. Once you start, your brain sees it wasn’t that bad and naturally does more. If it doesn’t, don’t push it – just do a push-up and you’re done, which gives your brain no reason to resist. If you do more, you feel happy and encouraged. If you don’t, you are not traumatized by the guilt of not living up to self-commitments and can still feel proud about finishing your commitment. It’s a small book, very much to the point, and a very interesting and practical way of hacking your own brain – which, by the way, is a topic very near and dear to my heart. Definitely a 4 / 5.

I’ve obviously not been reading as much as I wanted to and probably will not make it to book 53 by the end of the year, but the challenge has indeed opened me up to new books and new ideas and introduced me to topics I always wanted to read about. Net-net, it’s been fun so far and I hope I can read a few more interesting books before the end of the year.

