The title might be a little too harsh, but I assure you, I’m not the only person who feels that way. I read the script and sometimes I wish I hadn’t. While there are some revelations in it that I like, there are other parts that I just can’t accept are canon. …
There’s something of a silent war going on around us at this time, and it has been going on for quite a while. I know I wrote “Virtual Reality” in the title, but that’s only because it’s the generally preferred term for all of those projects out there making headsets and goggles; this post also covers my ideas about its siblings, the ones that go by the names “Mixed Reality” and “Augmented Reality.”
So, before we go on, let’s talk about how the siblings differ. Virtual Reality is where Oculus is the major player in the market, and it has met with considerable success. The HTC Vive is another; it may not have stirred as much excitement, but so far almost everything said about it has been positive. The idea of virtual reality, in terms your grandma could understand, is that you put on a headset and find yourself in a different world altogether. You look around, and all you see is what the headset shows you, while you are completely cut off from what’s actually around you. Rather like the Nygmatech device in Batman Forever. You put it on, and the next thing you know, you are in a forest; or perhaps in the middle of the French Revolution? …
Ever since its release in 2014, Christopher Nolan’s “Interstellar” has often been compared to Stanley Kubrick’s “2001: A Space Odyssey.”
I was fairly late to watching both. I missed the release of 2001 by more than 30 years, since I wasn’t anywhere close to existing when it came out, and for the first few years after I did start to exist, watching and comprehending it wasn’t entirely possible for me.
As for Interstellar, all I have to say is that I wasn’t really watching many movies when it came out. I guess my internet sucked and I was dealing with exams, so I put it off for a while; I didn’t want to ruin it by watching it in a hurry. It probably wouldn’t have made much difference, but since I had heard good things about it, I wanted to be relaxed and have ample time before I set about watching it.
My reaction to both was: What the hell?
2001: A Space Odyssey is considered the ultimate classic, and some would go as far as calling it the best of Kubrick’s works.
(Having never watched any of his other films, I can’t say much on the matter.) It started off with music that sounded pretty familiar, all thanks to Toy Story, and a few minutes in I was like, what the hell? That scene didn’t have to be stretched that long. My reaction was much the same to the one in which a woman walks along a velcroed path carrying a lunch tray to a sleeping guy. The scene with the apes was unnecessarily long too, so I fast-forwarded through it and missed the actual punchline (i.e., how they suddenly discover that a bone can be used as a melee weapon). Oh, and the part at the end known as the “Stargate sequence”: all you see for ten minutes is landscapes with the colors messed up, and for what?
Other than that, the lack of a decent conclusion might make it a cool mystery for some, but for me, it makes the film suck. Nothing was explained. The novels, the sequel that followed, and countless fan theories suggest that it was aliens leaving all those monoliths, but let’s face it: who watched the sequel or read the novels? Not a lot of people.
Minimalism helps. It always does. It’s clean, cool, beautiful, and relaxing. And it makes software more secure: every single element in an application, every single feature, every program in an operating system could open a door for attackers to get in through.
The recently discovered Mac malware Eleanor, which opens a backdoor, spread disguised as a harmless file-converter app listed on the MacUpdate site.
iPhone jailbreaking tools, not that I have anything against them, make use of similar vulnerabilities. The original JailbreakMe exploited a vulnerability in Safari on iOS 1.1.1, while the second version used a vulnerability in the PDF renderer.
I do realize it looks like I am suggesting that Safari or PDF readers or updater apps should not exist. What I am actually suggesting is that the more an app grows, the greater the chances for an attacker to get in. We can always, at the very least, keep things simple. For example, smartphones could ship with less pre-installed bloatware. Samsung could stop shipping their devices with apps like “Papergarden,” “Flipboard,” or “Samsung Apps” installed by default.
You are using a computing device, be it a smartphone, a tablet, or a desktop computer. It’s new and shiny, with few or no applications installed, apart from the bloatware the manufacturer may have generously shipped with it. You fire up Facebook in a web browser, like a couple of pictures, post a status, have a small chat with a friend, and after a while you close the tab and lock your phone. Later you do it again, and this time you spend a whole hour scrolling through the news feed; then once again you close the tab and lock your device.
Now, while it’s locked and still connected, your device makes a decision. Assuming that you like Facebook, it adds a Facebook icon to your homescreen, or your app drawer, for easy access to facebook.com. So the next time you unlock your iPhone, you simply tap that icon and it opens facebook.com in your default web browser. You love it. It’s just a simple link, but it already feels great, and it could be better. Soon enough, after another day’s use of the site, you notice that tapping the icon no longer opens a browser window with facebook.com. Instead, you get a window running Facebook on its own, like a standalone native application for your operating system. …
A network of resistors, each of resistance 1 Ω, is connected as shown.
The current passing through the end resistor is 1 A. What is the potential difference (p.d.) V across the input terminals?
D: 13 V …
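The figure isn’t reproduced here, so the following is a sketch under an assumption: this is the common ladder version of the question, where 1 Ω “rung” resistors are joined by 1 Ω series resistors and the end rung carries 1 A. Working backwards from the end, the currents and potential differences turn out to follow Fibonacci numbers, which is why the answer comes out to 13 V:

```javascript
// Back-substitution along an assumed 1-ohm ladder network.
// Start at the end resistor (1 A, so 1 V across it) and work
// towards the input terminals, alternating series drops and
// rung currents: 1 V, 2 V, 3 A, 5 V, 8 A, 13 V.
function ladderInputPd(r = 1, iEnd = 1) {
  let v = iEnd * r; // p.d. across the end resistor: 1 V
  let i = iEnd;     // total current drawn so far: 1 A
  for (let k = 0; k < 2; k++) {
    v += i * r;     // drop across the next series resistor
    i += v / r;     // plus the current through the next rung
  }
  return v + i * r; // final series resistor up to the input terminals
}
// ladderInputPd() → 13
```

If the actual figure differs from this assumed topology, the same backward-substitution idea still applies; only the loop count and wiring change.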
I have been a Linux user for the past few years, but I grew up using Windows, and I have always been closely interested in its progress and moves.
When Windows 8 came out, I was just about the only person I knew who didn’t hate Metro. All my friends thought it was ridiculous, and truth be told, it was. It seemed as if Microsoft had forgotten that people neither have a bunch of huge touchscreens lying around nor love a full-touch desktop experience, and Windows 8 was a weird cross between an OS optimized for touchscreens and an OS that didn’t look like it would ever work well with them.
Accessing the desktop by clicking a tile at the bottom left corner of the screen was oddly disturbing. It felt like the desktop had lost its old integrity, like it was only a tile among many, just another app like the ones behind the other tiles. Furthermore, at times it was hard to decide which world to live in: Metro, which had a really long way to go and was far from mature, or the desktop we’d both loved and hated for ages. For developers, it both sucked and was an opportunity at the same time. They had a new platform to master; some would go on to proudly declare themselves among the first 100 developers of Windows apps. Others saw it as mere clutter: another language and platform to come across and never quite read up on.
The question was: “Why?”
Back in 1950, Turing’s paper, titled “Computing Machinery and Intelligence,” was published in the journal Mind, and it was one of the things that can be credited with changing the way people thought about machines. Some readers were awestruck, while others only saw gibberish.
The paper, in a fair amount of detail, spoke of computers and the possibility of their one day being indistinguishable from a human. The present day may or may not be the future in question, but we have most definitely made a fine dent. Turing spoke of storage, memory, processing, and instructions in the section of his paper titled “Digital Computers.” The model of computing he describes there is what we know today as the Turing machine, which he had in fact formalized years earlier, in 1936.
The part on digital computers was preceded by “The Imitation Game.”
You might be familiar with the 2014 movie of the same name, starring Benedict Cumberbatch as a young Alan Turing who builds a machine to decrypt the messages encrypted by the German Enigma machine. The imitation game, defined carefully in the paper, is what could be used as a Turing test: a way to determine how close a machine comes to imitating the behavior and thinking of a human being, and whether it could hoodwink a human into mistaking it for one. The Turing test remains a popular topic among enthusiasts, and developers run different forms of it on their AI creations to this day.
I could go on for a while, but there’s honestly no point to it, and your time could be better spent reading the original article.
- The issue: Random flickering when changing the brightness using the function keys, and the change wasn’t steady. The slider in System Settings changed the brightness normally.
- The machine: Dell Inspiron N5110
The first solution I tried was creating the /usr/share/X11/xorg.conf.d/20-intel.conf file with the following line:
Option "Backlight" "intel_backlight"
This didn’t change anything. So I tried following “dushnabe’s” suggestion on this thread, which didn’t make any difference either. The problem, as I saw it, was that I appeared to be using both intel_backlight and acpi_video0, which use different ranges of values to set the brightness; hence the flickering. It became clear that I had to force the use of just one, and that’s exactly what the fix in that answer was supposed to do. Except that, for some reason, it wasn’t working.
After googling further, I landed on this page and saw the list of kernel parameters that have to do with the backlight. I rebooted a couple of times, trying a different parameter each time, and finally acpi_backlight=native is what did the trick. It doesn’t let me change the brightness on the login screen, but after login there was no flickering, and when I ran ls /sys/class/backlight/, it no longer listed acpi_video0. The only issue I have right now is that there is no fixed minimum: sometimes the brightness decreases to a reasonable minimum, while at other times it drops to a blackout, and I have to adjust it manually using the slider in System Settings or xbrightness.
To replicate this, all you need to do is:
- Fire up a terminal.
- Run sudo nano /etc/default/grub.
- At the very end of the GRUB_CMDLINE_LINUX_DEFAULT string (which in my case was “quiet splash”), add acpi_backlight=native. The final string, in my case, looks like “quiet splash acpi_backlight=native”.
- Save and close the file, then run sudo update-grub and reboot.
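The steps above can be condensed into a couple of commands. This sketch assumes the stock “quiet splash” default, and it runs the edit on a scratch copy first so nothing on the real system is touched until you’re happy with the result:

```shell
# Rehearse the edit on a scratch copy of /etc/default/grub first
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > /tmp/grub-demo
sed -i 's/quiet splash/quiet splash acpi_backlight=native/' /tmp/grub-demo
cat /tmp/grub-demo
# GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_backlight=native"

# Once it looks right, apply the same edit for real and regenerate grub.cfg:
# sudo sed -i 's/quiet splash/quiet splash acpi_backlight=native/' /etc/default/grub
# sudo update-grub && sudo reboot
```

The real commands are left commented out on purpose; run them only after checking the rehearsal output.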
In the event that this doesn’t work, it’d be worth your time to try the rest of the kernel parameters. You don’t have to modify the grub file every time; instead, you can modify the kernel parameters before boot. You can do this by pressing “e” on the GRUB menu entry and typing the desired parameter in the correct place, right after “splash.”
On January 3rd, I launched a tool hosted at, yep, that’s right, you guessed it: alexarank.io. What the tool does is pretty simple: it tracks the global Alexa ranks for domains and shows the change over time in a chart. It’s not exactly tracking every damn domain on the web, but nothing prevents it from doing so, except that someone, anyone who cares enough, has to submit the domain once; from that very instant, the tool starts tracking it.
On December 27th, Amin messaged me and shared his desire for a tool that would track changes in Alexa ranks for particular domains. With the slightest effort at googling, we both discovered a shitty tool that offered to do so at some price I didn’t even bother to remember. Alexa itself offers to do that for you, but they too, yep, you guessed it, do it for a fee. So I said to Amin, we need to make a free alternative, and we immediately started sketching out the concept, and after a while we started playing with code. Within the next few days we had a working, buggy prototype up, but it was uglier than you could possibly imagine; so we fixed the bugs, made it look presentable, and on the 3rd day of 2016 we registered the domain and it was up.
Am I honestly the only one who thinks that?
- They don’t hover.
- They aren’t anything like a hoverboard is supposed to be.
- People call them hoverboards.
- They are slow and impractical. And therefore useless for the average Marty.
- They cost a ridiculous amount.
I understand that it hurts that we don’t have hoverboards even after BTTF day, and it’s almost 2016. Hoverboards are the thing everyone expected to somehow just arrive with 2015, and then the year started and we were no closer to getting hoverboards than we were to getting an Android-based Apple device. Even then there was a small ray of hope: it wasn’t October ’15 yet. So the world waited, until we were a day from BTTF day, and that was when we knew we weren’t getting them. But did that mean we had to go for such a bullshit substitute? The “substitute” that doesn’t come even half as close to the hoverboards in the movie as the first maglev-based hoverboard, or the Lexus one, or the Hendo?
Buy them if you want, people, but please don’t call them hoverboards.
A few days back, I was talking to a friend I picked up online, who happens to be Indian, and as we were discussing some things loosely related to cricket, it hit me.
There’s this whole thing about Pakistan vs India matches. People, adults and teens and kids alike, all gather together some place where the live game is projected on a large screen. There’s something exciting about this even for those who aren’t much into cricket. There’s this feeling of unity, all thanks to the rivalry between the two countries. Note that the rivalry I speak of here is much more of a friendly one, and there’s technically nothing wrong with it, what with all those cheeky advertisements and all; it’s all about showing the other country who plays better.
The big idea is that a stadium, funded by both countries, be built with the consent of both governments, where all matches involving Pakistan and India take place (tough, but allow me to add “when possible”), preferably the ones the two countries play against each other, and people from both countries could come and watch them together. …
We have till then to develop the hoverboard, while rumor has it that Nike has already started working on the self-lacing shoes. In which case, I guess, it’s also safe to put off the hover-conversion I’ve been dreaming of for my car till then.
Check this out before you read any further.
Some time back, I googled the term “sprite” (I tend to forget what it is, though not anymore), and in return was presented with a fine number of them. My eyes were quick to notice that the very first result featured a few actions of Vegeta, one of the main characters from Dragon Ball Z. I thought it’d be cool to try creating a program, a web page to be precise, where one could animate Vegeta merely by clicking buttons.
So, well, on May 21, around sunset, I got to work. I googled and checked out a number of sprites until I finally decided on this one.
Though I had initially decided to do it purely in JS, it then hit me that CSS3 supports animations, so why not use that?
Anyway, as soon as I began, I realized that this sprite was not well suited to the job, so I had no choice but to slice it up into several smaller ones, each representing a different move. I started with the ‘stance’: I made a four-step sprite, initially 176×83, and tried animating it. The problem that remained was that I couldn’t get it to work the way I wanted; the image would just slide from left to right. After an hour or two I finally managed to get it right. It turned out that the number of “steps” I was using wasn’t well suited to the initial and final positions I had defined in the keyframes. In the end, I used 4 steps and set the final keyframe to ‘-widthOfImage.’ That was it. Now all I had to do was create sprites for the other moves.
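The fix described above can be sketched in a few lines of CSS. This is not the original stylesheet; the class name, file name, and timing are illustrative, and the frame maths assumes the 176×83, 4-step strip mentioned earlier:

```css
/* Show one frame of the 4-step, 176px-wide 'stance' strip at a time. */
.stance {
  width: 44px;                 /* one frame: 176px / 4 steps */
  height: 83px;
  background: url("stance.png") 0 0 no-repeat;
  /* steps(4) makes the background jump frame to frame instead of sliding */
  animation: stance-cycle 0.6s steps(4) infinite;
}
@keyframes stance-cycle {
  to { background-position: -176px 0; }  /* i.e. '-widthOfImage' */
}
```

The sliding problem comes from animating background-position with the default easing; steps(4) snaps the strip through exactly four positions per cycle, which is what makes it read as animation frames.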
Next I tried the “duck-and-punch,” which also worked like a charm with the existing code. But as I progressed, I realized the size wasn’t well suited to all actions. Some required a broader frame, while some (initially) required more than 4 steps, and that was when I somehow messed up the keyframes and wasted another hour fixing them. In the end, I decided to use 4 steps where possible (I actually omitted some of the images) and a 300×87 canvas, where each frame was 75×87.
I tried them all one by one by replacing the URL in the code, and all worked like a charm. Then I moved on to the ‘kamehameha’(s). The problem with these was that both sprites had 6 steps, so I had put them off for later. In the end, the number of steps didn’t make much difference, except that I had to update the step count in the CSS and resize the box each time.
Now to the UI. What actually happens is that when you click any of the buttons, it calls a JS function that replaces the URL of the image with a different one; a new sprite replaces the existing one and is thus seen performing a different action. The function also restores the original ‘stance’ sprite after the animation has completed one cycle.
As for the 6-step kamehameha(s), I wrote a different function and different keyframes. This function worked in a similar manner, but it would update the animation attributes (keyframes and steps) along with the image, and once one cycle completed, it would restore the original keyframes and image. That was all.
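The swap-and-restore behaviour the last two paragraphs describe can be sketched like this. The function and sprite names are illustrative, not the original code, and the scheduler is injectable only so the logic can be exercised outside a browser:

```javascript
// Swap in a move's sprite strip, then restore the stance once one
// animation cycle has played. `img` is any object with a `src` property
// (an <img> element in the real page).
function playMove(img, moveSprite, stanceSprite, cycleMs, schedule = setTimeout) {
  img.src = moveSprite;                                 // the new sprite starts animating
  schedule(() => { img.src = stanceSprite; }, cycleMs); // back to 'stance' after one cycle
}
```

In the real page the 6-step variant would additionally swap the animation name and step count before scheduling the restore, but the shape of the logic is the same.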
As for the energy bar, I declared a variable initially equal to 50. Every kamehameha decreases it by 10, while the other moves increase it by 5. The bar is updated every time the variable changes, its width being directly proportional to the value of the variable.
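That bookkeeping reduces to a few lines. Again a sketch, with names of my choosing rather than the original code:

```javascript
// Energy starts at 50; a kamehameha costs 10, every other move restores 5.
let energy = 50;
function useMove(isKamehameha) {
  energy += isKamehameha ? -10 : 5;
  // In the page, the bar's width would be set proportional to this value,
  // e.g. barEl.style.width = energy + "%";
  return energy;
}
```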
Google started as a search engine that gained global popularity; besides search, the only thing it offered was a home page light enough to test your internet connection with, but that doesn’t at all mean it wasn’t awesome back then. A whole lot of people owe their success to it, as it helped numerous youngsters with their homework and school projects; if they are successful now, Google might have contributed a whole lot to that success. According to its wiki article, Google started in 1998, and it has now come a long way from being just a search engine.
Then came other products. Gmail, launched in 2004 and open to everyone by 2007 (something by the name of Google Wave followed too, but it wasn’t very successful), soon became a rival to Hotmail and YMail. Well, OK, Hotmail isn’t Microsoft’s primary thing. MS was one of the first software companies; it brought computers to the world to be used by normal people, and we respect that. Who cares if Bing never got as popular as Google; MS isn’t all about search engines either. As for Yahoo: okay, Google beat their search engine, but loyal YMail lovers still like it, and their messenger is still respected more than Google’s Talk.
Ah yes, Google Talk, eh? This messenger falls in the same category as Windows Live Messenger and Yahoo Messenger; however, once again, not very successful. Earlier this year it was renamed Hangouts, and a few new features were introduced, like using phone numbers to find friends, etc., the kind of thing Viber and WhatsApp are famous for. Still, consumers like me prefer sticking to the older, original ones, i.e., the two mentioned above.
Then there’s Google Drive: cloud storage, and a rival to Microsoft’s SkyDrive. I like and respect both; each has its pros and cons and, in my opinion, they’re about equal on the whole. Plus, again, MS isn’t famous for being a cloud-storage provider. Google Docs is an online office productivity suite, whose MS counterpart is Office Online. Both are great.
Then there’s Google+, a social network by Google that perhaps isn’t as popular as Facebook and Twitter, yet comes third in terms of preference. You can see this in the fact that most websites, for contact or upvotes/recommendations, provide three buttons: one for Twitter, one for Facebook, and another for Google+. But still, not exactly on top, eh? Another is Picasa, a photo-sharing cloud service, perhaps a rival to Yahoo’s Flickr; though from what I know, Flickr is more popular.
YouTube, the primary place on the web to look for videos, is also owned by Google. It perhaps wasn’t started by them, but it’s improving a whole lot, and one can’t say Google didn’t contribute to that.
Google Maps is a rival to the iPhone’s Maps and the primary GPS service of most users nowadays, and its Street View is known to have captured some seriously interesting stuff. Google Earth is a similar product, but it provides an interactive, fun interface where you can explore the Earth. Similar products are Google Mars, Google Moon, and Google Sky.
But that’s not all; these products don’t account for even half of Google’s fame. There’s more. In 2008, Google launched Android, a Linux-based, free and open-source operating system for mobile devices, and the first Android device to appear on the market was the HTC Dream. Android gained popularity real quick, especially with the release of the Galaxy Y and other Galaxy devices; it soon won fans across a large part of the world and became a rival to iOS. Android devices are manufactured by the leading companies of the present, Samsung, Sony, and HTC, and of course there are Google’s own Nexus devices, one of which launches every year. Could Android kick iOS out of the market? Maybe, if they play well, as they are in a position to.
The same year, they launched Google Chrome, now the top web browser, which soon became another alternative to Internet Explorer alongside Mozilla’s Firefox. Well, Internet Explorer has had its day; it’s still respected.
As Chrome got popular and computing moved closer to the cloud, Google seized the chance and launched Chrome OS: another Linux-based OS, damn lightweight, and this time for more desktop-like devices. The idea behind Chrome OS goes something like this: every day, millions of users boot their computers, and once everything is fully loaded, they double-click the icon of their web browser and start off with whatever they want to do, rarely doing much outside the browser. They are more or less logging into their OS just to be able to use the browser. So what if your browser was your OS?
Chrome OS’s source code is publicly available; however, the OS itself isn’t available for download and comes preinstalled on Chromebooks, laptops officially built for it.
A few days back, Google announced the release of a Chrome Apps launcher for Mac OS X. Chrome Apps were originally a feature offered by the browser, more like extensions perhaps, but slightly closer to applications. They are also well integrated into Chrome OS, of course, where they are the main software. The thing about them is that they live entirely on the web; they don’t have to be installed. All you need is a launcher, which could be the standalone launcher itself, Chrome OS (which has a similar launcher, of course), or the Chrome web browser.
But they released the launchers for Mac OS X and Windows. I also read that apps are being made for Android and iOS too; however, I couldn’t find an Android app on the Play Store… not yet.
So, as these launchers roll out, a number of people might try them, and some, with high-speed connections and ordinary needs, might get so comfortable with the apps they try that the apps become popular. Some users’ computing might not extend beyond these apps, and that is how some might consider switching to Chrome OS itself. Plus, Chromebooks are real cheap, which attracts buyers.
Android is becoming the primary OS for smartphones, and there might soon come a time when, as I posted before, they merge the two projects.
Merging would bring a whole lot of improvements to Chrome OS, and then people might actually start using it, if it gets the integration I look forward to seeing and using.
Just look at how Google is expanding. It started as a search engine and is now the leading company of the web, and with the passage of time it’s trying its hand at every field there is in IT. Then there’s Google Glass: with that thing, they can actually monitor and see exactly what you are doing. People use Gmail as their primary email, Drive to store data, buy domains and hosting from Google, and walk around with a pair of specs on their noses, also developed by Google, that records what they see.
It’s like WE the CONSUMERS are being CONSUMED by technology, when it should be the other way round. If anything close to Skynet exists or ever will, it’s Google.
Two days back, Sony unveiled its Xperia Z1 smartphone, which packs a 2.2 GHz quad-core processor, a 20.7 MP camera, and the same magnificent display, perhaps a little larger too; and that’s the point. Every now and then a new phone comes out, and it’s pretty hard to choose the best among them, as the competition is tough. But what exactly does a day-newer smartphone carry? A slightly better processor? An extremely high-res camera? A waterproof display? Or… just a larger screen.
If this is what makes one phone superior to another, then it’s no competition, considering that one simply carries better hardware and is thus bound to perform better. Comparing a dual-core HTC with a quad-core Samsung, or the waterproof Xperia with a normal Huawei, or a 41 MP Lumia with any camera-equipped device, is like comparing a tank with a Vespa (no offence meant).
When Steve Jobs first introduced the iPhone in 2007, it didn’t pack a quad-core either, but it really hit the market, and people fell in love with the idea, because it was one. The idea behind the iPhone wasn’t to sell dual-cores or waterproof phones or integrated cameras or even fancy touchscreens; it was the revolutionary UI that set it apart from the others. If you don’t get it, simply watch the keynote (it’s available on YouTube, so I won’t bother sharing a link).
And soon, support for something similar made its way into Android, and yet iOS is still considered superior (though personally, I love both). That Android is what powers most of today’s smartphones, and the best of it is itself built on Steve Jobs’ revolutionary UI.
It’s true that Samsung introduced ‘Smart Pause,’ and ‘Eye Focus’ is another rumored release, but even these two, though a leap, aren’t exactly needed, nor are they going to make life better for us. They are simply building on the same thing and adding to it whatever they can, but this won’t do. They are supposed to be building mobile devices, yet they’re on the wrong track, enlarging the screen with every release. Similarly, it’s good for a phone to carry a camera, but a phone is not supposed to be a camera. Samsung’s eye-tracking thing is good in itself, but do we really need it? Though it sounds pretty simple, it might strain the eyes and be a bit error-prone, and what’s the point of it being in a phone?
The floating touch / motion control, rumored to be in some Xperia model and in smart TVs respectively, is pretty advanced too, but we don’t need all this in our phones. People prefer simplicity. Microsoft could have introduced it in their Windows Phones ages ago if they’d wanted to, seeing as we’ve seen something similar yet better in the Xbox, but they don’t have to or need to. The simple UI of Windows Phone 7+ is an example of how they are keeping it simple, and how people are loving it.
The Moto X introduced the talk-to-your-phone thing, and that was kind of new, as was shake-to-open-camera. They are minor improvements, but they fall in the same category as Huawei’s screen temperature, HTC Zoe, and Samsung’s counterpart to Zoe.
Nowadays, all the improvements made to smartphones amount to enlarging the screen or making it more resistant, better RAM or CPUs, etc., or perhaps better software, but it’s still the same smartphone. Where’s all the creativity, the innovation? Are they really running out of ideas?