Toggl’s “Building a Horse with Programming” comic explained

If you don’t know what this is about, you should first go and check it out.

So, C++ is, for me at least, an intolerable programming language. Everything from the code to the resultant program tends to be as ugly as it gets. More than half the time it feels like it was hacked together to include everything. However, it has been around for a very long time, and you can use it to do just about anything you could want to do with a programming language. Only, the experience and the result may not exactly be great.

Then we have Java. The main problem with Java is that for a language that aims to be useful for all sorts of applications on all platforms, it’s missing a lot of features that full-stack developers commonly use. Very often, when writing programs in Java, programmers end up spending more time than they should creating new types and methods just so they can use them in the program they actually set out to write.

Ever since npm and Node.js gained popularity, JavaScript has become one of the world’s most widely used languages, and npm is probably the most used package manager of them all. The thing about JavaScript programmers is that they use a lot of external libraries and packages, and every once in a while they add a package of their own to the global registry just to get a kick out of it. By now there are probably more packages on npm than there are libraries for any other language out there, and when you are a JavaScript developer, you really have to use them if you want to get any work done. The pink “Backbone” and “Angular” are references to Backbone.js and Angular.js, two popular JS frameworks.

NoSQL refers to database systems that are non-relational and don’t use SQL. An example is MongoDB, which stores JSON objects grouped into collections. The joke is that the non-relational model doesn’t always expose enough structure for you to get at your objects without going through the abstract API.
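To make the document model concrete, here is a toy in-memory sketch in plain JavaScript. This is not MongoDB’s actual driver API; the `users` collection and the `find` helper are made up for illustration. The point is that documents are just JSON objects, and queries match fields instead of joining tables.

```javascript
// A toy document store: a "collection" is just an array of JSON objects.
const users = [
  { _id: 1, name: "Ada", languages: ["Lisp", "C"] },
  { _id: 2, name: "Linus", languages: ["C"] },
];

// Mimics collection.find({ field: value }): return every document
// whose top-level fields match the query object exactly.
function find(collection, query) {
  return collection.filter((doc) =>
    Object.entries(query).every(([key, value]) => doc[key] === value)
  );
}

// No schema, no foreign keys: relationships between documents
// have to be handled in application code, via the API.
const result = find(users, { name: "Linus" });
```

There is no declared schema anywhere; a document with entirely different fields could sit in the same collection, which is both the flexibility and the joke.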

COBOL? Well, I guess no one entirely gets it except for its creator.

Lisp has a lot of parentheses. Just Google some sample Lisp code and you’ll see.

C# is a fairly complete and tolerable language. The problem? Microsoft. Windows. The costume is there because C# is basically Java in a costume (it closely resembles Java in terms of syntax and semantics), and the camel is the Windows environment. The point being that C# programs don’t always work as intended when not running on Windows.

Assembly doesn’t really offer a lot of language features. There’s a basic set of operations that has to be used for everything. But coding in such a low-level language means you get control over aspects that most languages abstract away, so you can make a really efficient program; hence the thing about running.

Everyone hates PHP. They say it’s a terrible, unsafe language with unpredictable behavior. I personally have reasons to really like PHP, but I will still tell you that most good posts on the subject target aspects of the language that genuinely do suck, so some of the hate is justified.

 

Angular vs React vs Vue.js

This is the shortest and most effective comparison that aims to make the decision-making process easier.

Angular: 

Pros:

  • The oldest and therefore very mature.
  • A complete framework in itself and ideal for large projects.
  • The MEAN stack remains, to date, one of the most popular stacks for web development, so finding jobs is never tough for Angular developers.
  • With NativeScript, you can use it to develop smartphone apps.

Cons:

  • If you don’t know, don’t like, or don’t want to learn TypeScript, I would suggest staying away from Angular, because the current releases are TypeScript-based.
  • Packed with features, it’s a huge framework and therefore takes a fair amount of time to learn and master.

React:

Pros:

  • Been around for a fair amount of time and therefore can be said to be just as mature as Angular.
  • Smaller than Angular and therefore easier to learn.
  • Just as popular, if not more, and therefore has a well-developed community.
  • Just as many jobs out there for it as for Angular.
  • React Native allows it to be used to build native mobile apps for both Android and iOS.

Cons:

  • You will have to learn JSX. The idea in React is that you maximize your use of JS and minimize your use of HTML, and JSX essentially merges the two. Obviously this adds to the learning curve.
  • Not a complete framework. React developers almost always end up adding Redux or some other Flux implementation for state management, and the code often becomes a real mess in large projects.
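For a sense of what the JSX learning curve amounts to: JSX is syntactic sugar that compiles down to plain function calls. The stub below mirrors the (type, props, ...children) shape of React.createElement; it is a sketch for illustration, not React’s actual implementation.

```javascript
// A stand-in with the same call shape as React.createElement.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

// The JSX  <button className="save">Save</button>  compiles to:
const button = createElement("button", { className: "save" }, "Save");

// Nested JSX becomes nested calls:
const toolbar = createElement("div", null, button,
  createElement("span", null, "Cancel"));
```

So the “HTML” you write in a React component is really JavaScript expressions all the way down, which is exactly why it feels foreign at first.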

Vue.js:

Pros:

  • Has just as many features as React, if not more.
  • Small enough for you to be able to learn it in a single day.
  • Great for small projects.
  • Gaining popularity real quick.
  • Resembles the original AngularJS, so if you are familiar with that, you’d love Vue.
  • Unlike React, with Vue, the goal is to maximize the use of HTML and minimize the use of JS.
  • Although not a complete framework, it comes with its own router and Flux implementation which, although independent projects, integrate seamlessly with Vue itself.
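To illustrate the HTML-first point from the list above: in Vue, the markup stays markup, and {{ }} placeholders are filled from a data object. The snippet below is a toy interpolation function written for illustration; Vue’s real template compiler does far more (directives, reactivity, components).

```javascript
// Vue-style template: plain HTML with {{ }} placeholders.
const template = "<p>Hello, {{ name }}! You have {{ count }} messages.</p>";

// A naive renderer: replace each {{ key }} with the matching data field.
function render(tpl, data) {
  return tpl.replace(/\{\{\s*(\w+)\s*\}\}/g, (_, key) => String(data[key]));
}

const html = render(template, { name: "Ada", count: 3 });
```

Compare this with JSX, where the element tree is built in JavaScript: here the HTML is the source of truth and the JS only supplies data.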

Cons:

  • Relatively new and therefore has a small community and fewer jobs are available.
  • NativeScript has a plugin for Vue support, but it’s not yet supposed to be ready for development. That said, it seemed pretty functional when I tried it.
 

Die Verwandlung

I just finished reading Franz Kafka’s “Die Verwandlung,” translated as “The Metamorphosis” by Stanley Corngold. Before I even started reading it, I’d told my classmate about how it seemed like a weird one (something I’d deduced from its synopsis), and he’d responded with “Franz Kafka’s works tend to be.”

Now that I’m done with it, I’m not quite sure how I feel about it. For one, it was incredibly short. Like I knew it was a short story but I’d still been expecting it to be longer. Then there comes the fact that (SPOILER ALERT) 
… 

 

Linus on C++

C++ is a horrible language. It's made more horrible by the fact that a lot 
of substandard programmers use it...
In other words, the only way to do good, efficient, and system-level and 
portable C++ ends up to limit yourself to all the things that are 
basically available in C.
In general, I'd say that anybody who designs his kernel modules for C++ is 
either 
 (a) looking for problems
 (b) a C++ bigot that can't see what he is writing is really just C anyway
 (c) was given an assignment in CS class to do so.

Feel free to make up (d).

You can read the emails here.

 

How GoDaddy robbed me.

Humble request to all readers: Share this post.

I really want to keep this as short as I can and address only the primary issue. For those who don’t know, GoDaddy is one of the most popular domain registrars in the market, and I started using it a couple of years back to get rid of my local registrar, who had been routinely screwing up in the most spectacular ways.

Abstract: Lately, they have not only made a few irrational and immoral decisions, but they also robbed me of a rather large sum of money and then messed up my order completely. In short, I paid thrice for the same order and I didn’t even get what I ordered. Furthermore, somehow the same domain was purchased twice using my account and I have absolutely no idea how it’s even possible. … 

 

Did Nokia really not do anything wrong? They did.

“We didn’t do anything wrong but somehow we still lost.”

Those were the words of the CEO. Do I agree with them? No.

Let’s go back to, say, 2006. Every average person had a Nokia. The Motorolas? Those were what people bought between two successive Nokias. Sony Ericsson? Well, that one had its own cult. There was a wide variety of phone lines in the market, each aimed at a different class of users: the basic feature phones for those looking for a cheap calling device, the multimedia-enabled ones for those who wanted more, the communicators for those who could afford them.

That wasn’t all. That was the time when Nokia ran some strange experiments resulting in some really weird and unique phones. And guess what? A large percentage of those took off as well. Examples include the N-Gage and the N-Gage QD, gamepad-shaped devices aimed at gaming; I happen to have owned both. Nokia was also infamous for coming up with some really weird designs which, surprisingly, sold just as well.

Why? Because Nokia owned the market. They were among the pioneers and had almost monopolized the mobile market. Whatever they produced was considered good and pretty, regardless of how shitty it might actually be.

All the awesome devices that Nokia ever made were … 

 

Kaspersky OS

First, it’s based on microkernel architecture, which allows to assemble ‘from blocks’ different modifications of the operating system depending on a customer’s specific requirements.

Second, there’s its built-in security system, which controls the behavior of applications and the OS’s modules. In order to hack this platform a cyber-baddie would need to break the digital signature, which – any time before the introduction of quantum computers – would be exorbitantly expensive.

Third, everything has been built from scratch. Anticipating your questions: not even the slightest smell of Linux. All the popular operating systems aren’t designed with security in mind, so it’s simpler and safer to start from the ground up and do everything correctly. Which is just what we did.

Let’s talk about this. Microkernel design? Interesting, but MINIX has had that for ages. Linux vs MINIX = monolithic vs microkernel = performance vs security. Yes, going for one kernel design over the other does mean compromising one aspect for the other. In short, the decision to use a microkernel isn’t honestly innovative.

Built-in security system? Oh wow, sure, whatever. Give us more details and then we will consider its existence and efficiency.

Everything has been built from scratch? I admire the effort, but at the end of the day, it is going to have to be POSIX-compatible, and it’s hard to say whether it was really worth it. I hate to break this to you, but it would have saved time, and made more sense, to proofread the existing code instead of rewriting it.

In short: As of now, it offers nothing too interesting. Sure, I’d like to download an image and give it a go but that’d probably be it.

 

The Virtual Reality I want.

There’s something of a silent war going on around us at this time, and it has been going on for quite a while. I know that I wrote “Virtual Reality” in the title, but that’s merely due to the fact that it’s the generally preferred term for all of those projects out there making headsets and goggles, but otherwise this post does cover my ideas about its brothers that go by the names “Mixed Reality” and “Augmented Reality.”

So, before we go on, let’s talk about how the brothers differ. Virtual Reality is the one where Oculus is the major player in the market and has met with fair success. The HTC Vive is another that may not have stirred as much excitement, but so far, all things positive have been said about it. The idea of virtual reality, in terms your grandma could understand, is that you put on a headset and find yourself in a different world altogether. You look around, and all you see is what the headset shows you, while you are completely cut off from what’s actually around you. Rather like the Nygmatech device in Batman Forever: you put it on, and the next thing you know, you are in a forest, or perhaps in the middle of the French Revolution? … 

 

Minimalism and security.

Minimalism helps. It always does. It’s clean, cool, beautiful and relaxing. Oh and it allows for security in software. Every single element in an application, every single feature, every program in an operating system could open doors for attackers to get in through.

The recently discovered Mac malware Eleanor, which opens a backdoor, works by exploiting a vulnerability in the MacUpdate application.

iPhone jail-breaking applications, not that I have anything against them, make use of similar vulnerabilities. The original JailbreakMe exploited a vulnerability in Safari in iOS 1.1.1, while the second version used a vulnerability in the PDF reader.

I do realize that it looks like I am suggesting that Safari or PDF readers or updater apps shouldn’t exist, but what I am actually suggesting is that the more an app grows, the greater the chances for an attacker to get in. We can always, at the very least, keep things simple. For example, couldn’t smartphones ship with less pre-installed bloatware? Couldn’t Samsung stop shipping their devices with apps like “Papergarden,” “Flipboard,” or “Samsung Apps” installed by default?

 

Contextmenus.js

A purely JavaScript-based solution allowing for easy creation of right-click context menus. Browse the code on GitHub. Demo

So, Haider posted on his Facebook timeline a link to his then newly set-up GitHub repo, which he had named “rightclick.js.” It was pretty clear what it was about, so I gave his code a look. He is using jQuery and (for some reason unclear to me) Node.js.

This morning, I decided to make my own in pure JavaScript. I started in the afternoon and got done a couple of hours ago. I wanted to call it contextmenu.js, but there already exists a script by that name, and thus, out of respect, I renamed mine to contextmenus.js. The code is a couple of files that together take up a total of 1,812 bytes of disk space. Everything you need to know to get it working is explained in the README.md on the GitHub page.
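For anyone curious how a pure-JS context-menu script works under the hood, here is a minimal sketch. It is not the actual contextmenus.js code, and the "my-menu" element id is made up: the idea is to intercept the contextmenu event, suppress the native menu, and position a custom element at the cursor, clamped so it never overflows the viewport.

```javascript
// Keep the menu inside the viewport: shift it left/up when it
// would otherwise spill past the right/bottom edge.
function clampMenuPosition(clickX, clickY, menuW, menuH, viewW, viewH) {
  return {
    x: Math.min(clickX, viewW - menuW),
    y: Math.min(clickY, viewH - menuH),
  };
}

// Browser-only wiring (guarded so the positioning logic above can
// also run outside a browser).
if (typeof document !== "undefined") {
  const menu = document.getElementById("my-menu"); // hypothetical menu element
  document.addEventListener("contextmenu", (e) => {
    e.preventDefault(); // suppress the browser's native menu
    const { x, y } = clampMenuPosition(
      e.clientX, e.clientY,
      menu.offsetWidth, menu.offsetHeight,
      window.innerWidth, window.innerHeight
    );
    menu.style.left = x + "px";
    menu.style.top = y + "px";
    menu.style.display = "block";
  });
  // Any ordinary click dismisses the menu.
  document.addEventListener("click", () => (menu.style.display = "none"));
}
```

No jQuery and no Node.js needed; the browser’s own event model is enough, which is the whole appeal of doing it in pure JavaScript.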

 

From Windows 8 to 10 – The excitements and the disappointments.

tl;dr

I have been a Linux user for the past few years, but I grew up using Windows, and I have always kept a close eye on its progress and moves.

When Windows 8 came out, I was just about the only person I knew who didn’t hate Metro. All my friends thought it was ridiculous, and truth be told, it was. It seemed as if Microsoft had forgotten that people neither have a bunch of huge touchscreens lying around at home nor love the full-touch desktop experience, and Windows 8 was a weird cross between an OS optimized for touchscreens and an OS that didn’t look like it would ever work well with them.

Accessing the desktop by clicking a tile at the bottom-left corner of the screen was oddly disturbing; it felt like the desktop had lost its old integrity, like it was only a tile among many, just another app like the ones behind the other tiles. Furthermore, at times it was hard to decide which world to live in: Metro, which had a really long way to go and was far from mature, or the desktop we’d both loved and hated for ages. For developers, it sucked and was an opportunity at the same time. They had a new platform to master; some would go on to proudly declare themselves among the first hundred developers of Windows apps. Others saw it as mere clutter: another language and platform to come across and not read articles about.

The question was: “Why?”

… 

 

Fixing the brightness issue on Ubuntu 16.04

  • The issue: Random flickering when changing the brightness using the function keys, and the change itself wasn’t steady. The slider in System Settings allowed me to change the brightness normally.
  • The machine: Dell Inspiron N5110

The first solution I tried was creating the /usr/share/X11/xorg.conf.d/20-intel.conf file with the following lines:
Section "Device"
    Identifier "card0"
    Driver "intel"
    Option "Backlight" "intel_backlight"
    BusID "PCI:0:2:0"
EndSection

This didn’t change anything. So I tried following “dushnabe’s” suggestion on this thread, which didn’t really make any difference either. The problem, as I saw it, was that I appeared to be using both intel_backlight and acpi_video0, which use different ranges of values to change the brightness; hence the flickering. It became clear that I had to force the usage of just one, and that’s exactly what the fix in that answer was supposed to do. Except that, for some reason, it wasn’t working.

After googling further, I landed on this page and saw the list of kernel parameters that had to do with the backlight. I rebooted a couple of times, each time trying a different parameter, and finally acpi_backlight=native is what did the trick. It doesn’t let me change the brightness on the login screen, but after login there was no flickering, and when I ran ls /sys/class/backlight/, it no longer returned acpi_video0. The only issue I have right now is that there is no fixed minimum: sometimes the brightness decreases to a reasonable minimum, while at other times it results in a blackout, and I have to adjust it manually using the slider in System Settings or using xbrightness.

To replicate this process, all you need to do is:

  • Fire up a terminal
  • sudo nano /etc/default/grub
  • At the very end of the string GRUB_CMDLINE_LINUX_DEFAULT (which in my case was “quiet splash”), add acpi_backlight=native.
    The final string, in my case, looks like “quiet splash acpi_backlight=native”
  • Save and close the file, then run sudo update-grub and reboot.

In the event that this doesn’t work, it’d be worth your time to try the rest of the kernel parameters. You don’t have to modify the grub file every time; you can instead modify the kernel parameters at boot by pressing “e” on the GRUB menu to edit the boot entry, then typing the desired parameter in the correct place, right after “splash.”

 

Hands on with Ubuntu 16.04 “Xenial Xerus”

Being one of those idiots who started downloading the ISO way before the link was even officially added to the download page, I do have a couple of reasons to regret doing so. I was on a slightly messed up 14.04 that appeared to have deteriorated over time, and I had been considering a reinstall, but had been putting it off because I had decided to wait until after the release of Xenial.

So, fast-forwarding to when I was done installing it. As per habit, the moment it was installed I fired up a terminal and opened Firefox at the same time. The first thing I noticed was that the terminal prompt (“user@hostname:~$”) was now green. Then I ran an apt-get update, which obviously was pointless as the release had just come out, and around the same time I also noticed that the terminal seemed to insist that I use “apt” in place of “apt-get” in commands. I don’t honestly know what inspired this change, but it’s just another minor one.

Two changes that we had been hearing about since way before the release were: … 

 

Using two routers to extend a network – Part 2

The goal: Create two separate networks, each with its own router. The routers will have different SSIDs and security settings; the WAN settings of A are configured to connect to the internet, while B, being a subnetwork of the first, connects to the internet through it.

Now, the thing is that the LAN and WAN IP addresses cannot be in the same subnet, so here’s what I did: I changed the subnet mask of A from 255.255.255.0 to 255.255.0.0, and I changed its IP address to 192.168.1.1. That’s all the configuration you need to do on Router A, assuming it is already set up to connect to the internet.

Now get an Ethernet cable and plug one end into any of the LAN ports on A (some recommend the first), and the other end into the WAN port of B. Log in to B’s portal; yes, it’s at 192.168.0.1. I don’t see why a dynamic IP shouldn’t work, but since it didn’t for me, let’s assume it won’t work for anyone else. Select Static IP in the startup wizard and you’ll be greeted by a number of blank input boxes. Fill them in as follows:

IP Address: 192.168.1.2

Subnet Mask: 255.255.255.0

Gateway: 192.168.1.1

Primary DNS Server: 192.168.1.1

That ought to do the trick. You might want to do a reboot, but that’s not always necessary.

 

Using two routers to extend a network – Part 1

Umm, yeah, so let’s get to it. What was the first interpretation? oh that’s right, Router B to act as a wireless access point for A.

So, A has an internet connection, and B has to be connected to it via a cable and configured in such a manner that devices automatically connect to whichever of the two has the better signal as you move about. Since B is acting as an access point, all the data B sends and receives needs, of course, to pass through A. (Pardon me if something I’ve written doesn’t seem correct; I’m merely a noob, explaining in terms your grandma could understand.)

This was actually pretty simple, so I’ll just list the steps, leaving out the screenshots.

  1. Get an Ethernet cable and insert one end of it into any LAN port on A, and the other end into the first LAN port of B. (Actually, I’m not sure whether it has to be the first port or not.)
  2. Log in to the web interface of B and set the SSID (i.e., the name of the network) and the security settings of B to be the same as those of A. E.g., if A is called “narlges” and is using WPA with passphrase “flutterwacken,” then you need to apply the same settings on B.
  3. Making sure that both A and B are in the same subnet, change the LAN IP address of B to something other than that of A. So if the IP of A is 192.168.0.1, you can set B to 192.168.0.X, where X is any number from 2 to 254 (0 and 255 are reserved, and 1 is already taken by A).
  4. Disable DHCP on B as it won’t be assigning IP addresses and all.
  5. Other wireless and radio settings, like the channel, need to be the same too.
  6. Reboot both routers?

And basically that’s it.

 

Using two routers to extend a network

I have recently been faced with this challenge, partly for learning, as it’s kind of an enthusiast thing, and partly because I might actually need to do it in the near future. Since the title might seem a bit vague or ambiguous to some, let me first make clear exactly what it is I’m after. How about we start by listing interpretations? (My goal and the whole point of all this will come later.)

Router A = TL-WR841N, and this one’s configured to connect to the internet using PPTP.

Router B = Tenda W268R.

  1. I have two routers, and I want B to act as a wireless access point to extend the network’s range.
  2. I have two routers, and I want B to have a LAN of its own, with A as a gateway providing access to the internet.
  3. I want to do either of the things listed above over a wireless bridge.

Let me say this much. I am a newbie. I’m not much of a networking guy, nor do I really know how this is going to work. I’m simply Google-ing and experimenting.

In the next few posts, I will explain what I have tried and what the outcome was.