Wednesday, 2 March 2016

Bill Gates richest in world, Mukesh Ambani at 36th: Forbes

NEW YORK: Microsoft co-founder Bill Gates continued his reign as the world's richest person with a net worth of $75 billion, according to Forbes' annual ranking of billionaires, with Reliance Industries chairman Mukesh Ambani leading the pack of 84 Indian billionaires in 2016.

Forbes' 2016 list of the World's Billionaires includes 1,810 billionaires, down from a record 1,826 a year ago.

Their aggregate net worth is $6.48 trillion, $570 billion less than last year.

Gates remains the richest person in the world with a net worth of $75 billion, despite being $4.2 billion poorer than a year ago. He has been number one for three years in a row and has topped the list in 17 of the past 22 years.

Forbes said Ambani, 58, has retained his position as India's richest person despite shares of his oil and gas giant Reliance Industries taking a hit due to lower oil prices.

He is ranked 36th on the list with a net worth of $20.6 billion, the magazine said, adding that "the $62.2 billion (revenues) firm is likely to resume buying crude oil from Iran after the lifting of sanctions."

In third place is billionaire philanthropist and Berkshire Hathaway CEO Warren Buffett, followed by Mexican billionaire Carlos Slim Helu (4th) and Amazon CEO Jeff Bezos (5th).


Ambani leads a pack of 84 billionaires from India, with pharma magnate Dilip Shanghvi (ranked 44th with a net worth of $16.7 billion), Wipro chairman Azim Premji (55th with a net worth of $15 billion) and HCL co-founder Shiv Nadar (88th with a net worth of $11.1 billion) coming in among the top 100 billionaires.


Other prominent Indian billionaires include ArcelorMittal Chairman Lakshmi Mittal (135), Bharti Airtel's Sunil Mittal (219), ports and power magnate Gautam Adani (453), matriarch Savitri Jindal (453), Bajaj Group's Rahul Bajaj (722), Infosys chairman emeritus N R Narayana Murthy (959) and Mahindra Group's chief Anand Mahindra (1577).


Sunday, 28 February 2016

Here is why the latest Alienware 13-inch with OLED screen is the best gaming laptop




New Alienware 13-inch is the first gaming laptop with OLED screen

During CES 2016, the annual trade show, several gaming machines were put on display, but the Alienware 13 arguably won the crown as the forerunner of what gaming laptop displays are supposed to look like. One of the biggest changes incorporated in the Alienware 13 is that it does not feature a traditional display, but an OLED one (organic light-emitting diode).
OLED displays are the kind found in smartphones and TVs, and they are what give those products their rich colors. With an OLED display on a gaming laptop, gamers will also be able to experience rich and vibrant colors while playing the latest titles. Combined with the latest gaming engines, not to mention Microsoft's DirectX 12 API, the pairing of OLED displays and next-generation graphics points to a visual gaming experience that could never have been achieved before.
In addition to displaying rich and vibrant colors, OLED displays have one other advantage: they can refresh their pixels at an extremely fast pace, which reduces visual stuttering in games. Even with the latest hardware installed, there are spontaneous moments where you experience visual stuttering for a split second. With OLED gaming laptops, such issues should become a thing of the past. The only drawback of the Alienware 13 is its very high resolution for a 13-inch gaming laptop.
At 3200 x 1800 pixels, it will be relatively difficult for the embedded processor and mobile GPU to deliver respectable frame rates in graphically intensive titles. But as companies like Intel, NVIDIA and AMD keep moving up a generation, they will continue to introduce better chips that are extremely energy efficient and able to push out a lot of pixels with ease.
With this display breakthrough, we are confident that many manufacturers will take note of the advancement and prepare to deliver their own OLED gaming notebooks to consumers in 2016.

Saturday, 27 February 2016

Apple Is Now Working To Make An “Unhackable iPhone” That Even It Can’t Hack



In response to the latest FBI vs. Apple controversy, the technology company is reportedly working on advanced security measures that would make a locked iPhone impossible to crack. The new measures would even block the methods used by the US government to access the iPhone tied to the San Bernardino rampage.
The news comes from a New York Times report that cites sources close to the company and security experts. According to the report, Apple engineers are busy working on new security measures that would create a significant challenge for law enforcement agencies, even if the Obama Administration wins its iPhone unlocking fight.
According to the experts, the only way out of this feud is the involvement of Congress. Notably, federal law requires phone carriers to make their data accessible to law enforcement agencies, but technology companies like Google and Apple are not covered.
Many of you may not know that each iPhone has a built-in troubleshooting system that allows Apple to update the system software without the need to unlock the phone. The FBI is eyeing this feature and trying to force Apple to install software that would break the password of the San Bernardino iPhone.
To make the iPhone completely impenetrable, the company is working to remove this troubleshooting feature. Then, even with Apple's assistance, iPhones would become impossible to crack. This could also fuel more legal battles and arguments over whether companies like Apple can be forced to help the government in extreme cases.
In a recent statement, Apple CEO Tim Cook reiterated his stance by calling a backdoor into the iPhone “the software equivalent of cancer.” Do you agree with Mr. Cook? Share your views in the comments below.

Friday, 19 February 2016

Samsung to offer 360-degree live stream of Galaxy S7 launch event




As we gear up for the launch of Samsung's next set of flagships, the Galaxy S7 and S7 edge, the company has announced that you'll be able to watch a live stream of the event in 360-degree video. This is a first for Samsung's Galaxy Unpacked event, and the company is going all-out by offering different angles from which to watch the stage:
In addition, during Galaxy Unpacked you will get four options for where to watch—from the auditorium, stage left, stage right or from center stage. Pick whichever spot is right for you, then use the 360 live streaming to get a complete view of the event from every angle.
As far as watching is concerned, there are a few different ways to tune in. If you're watching from a computer, you can do so from the Galaxy Unpacked page on Samsung's website. If you're planning to check things out on mobile, however, you can snag the Unpacked 360 View app on Google Play. Of course, Samsung would prefer you check the event out on a Gear VR headset, so you can also grab the app from the Oculus Store.
As a reminder, Galaxy Unpacked 2016 kicks off on February 21 at 7:00 p.m. CET (1 p.m. ET and 10 a.m. PT) ahead of Mobile World Congress.

Wednesday, 10 February 2016

Tired Of Slow WiFi? Your Good Old FM Is Here To Help You



Living in a crowded neighborhood can seriously affect your WiFi speed. For those who don't know: you and the people living around you share a limited number of wireless frequency channels, which WiFi networks use to move data. So, if more people around you are using WiFi, there are good chances that the networks will overlap and kill your speed.
This is a very common problem, especially if you are living in an apartment. To solve it, Aleksandar Kuzmanovic, associate professor of electrical engineering and computer science at Northwestern University, is trying something new.
He describes the root cause of the problem like this: no WiFi device has a reference point for what the other WiFi devices are doing. This lack of coordination and timing creates trouble that results in poor WiFi performance.
To enable the devices to coordinate with each other, the researchers have developed the first system for WiFi devices that coordinates without any human involvement and operates over FM frequencies.
By sharing information via FM's RDS (Radio Data System) signal, WiFi networks can operate in coordination. Talking to All Tech, Kuzmanovic says, “Devices are able to detect that there is this particular repeating structure and hence they are all able to independently come to the conclusion that hey, this must be the beginning of this particular RDS signal sequence that’s repeating in time.”
Thus, the RDS signals act like a clock for WiFi devices, harmonizing the operations of multiple devices. In a recent research paper, the researchers called this technique Wi-FM and outlined one possible scheduling algorithm.
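To make the intuition concrete, here is a toy sketch in Python, not the authors' actual Wi-FM algorithm: each device independently detects the period of a repeating RDS block sequence and then hashes its own address into a transmission slot within that period. All names and values here are hypothetical.

```python
# Toy illustration of the Wi-FM idea: devices listening to the same FM
# broadcast detect its repeating structure and use it as a shared clock.
# This is a simplified sketch, not the paper's scheduling algorithm.

import hashlib

def find_period(samples, max_period=16):
    """Return the smallest period at which the sample sequence repeats."""
    for period in range(1, max_period + 1):
        if all(samples[i] == samples[i % period] for i in range(len(samples))):
            return period
    return None

# Both devices observe the same broadcast: a repeating group of RDS blocks.
rds_stream = ["A", "B", "C", "D"] * 8   # hypothetical repeating RDS groups

period = find_period(rds_stream)        # each device derives the same period
print(f"Detected repeating RDS structure with period {period}")

# Each device deterministically maps its MAC address to a slot, so nearby
# devices tend to transmit at different points of every RDS cycle.
for mac in ("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"):
    slot = int(hashlib.md5(mac.encode()).hexdigest(), 16) % period
    print(f"Device {mac} transmits in slot {slot} of each cycle")
```

Because every device derives the same period from the same broadcast, they can agree on slot boundaries without ever exchanging a message.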

Monday, 8 February 2016

What is the Blue Screen of Death in Windows – The Complete Guide



BSOD or “Blue Screen of Death” is the error message displayed when the Windows OS fails to keep its promise of working smoothly without errors. Basically, when the operating system crashes for some reason, it shows a Blue Screen of Death.
Let’s learn everything about the Blue Screen of Death:

Where did it all start?

If you ever Google who created the Blue Screen of Death, you'll be presented with the name of Microsoft's ex-CEO Steve Ballmer, but that is not as true as Google thinks, all because of a misinterpretation of a post by Raymond Chen titled “Who wrote the text for Ctrl+Alt+Delete in Windows 3.1?”, reported by all the major journals, like The Verge, Engadget, Business Insider and DailyTech, on September 4, 2014.
The post was about the rudimentary task-manager screen that came into existence with Windows 3.1, whose interface was similar to that of the Blue Screen of Death; maybe this was the reason for the misinterpretation. Raymond realized the mistake, though, and criticized BGR.com for having “entirely fabricated a scenario and posted it as real” in a follow-up published on September 9, 2014.
BSODs have been around since Windows NT 3.1 and have made their appearance in every subsequent Windows version. More advancement led to more frequent crashes and OS instability issues, especially in the Windows 9x series, which experienced the most BSODs of all, largely due to incompatible DLL files and kernel bugs.

What causes the Blue Screen of Death?

The official name for a BSOD is a STOP error, raised when kernel-level software runs into a problem from which there is no way out other than a system restart. BSODs are generally the outcome of hardware-related errors in a device.
A Blue Screen of Death shows an error code, the custom name assigned to it, and some explanatory text. It may be triggered by the secret actions of programs we call malware or by some corrupt file in the system, finally leading to a system crash and, in some cases, the loss of all data.
Windows creates a minidump file that stores all the information about a Blue Screen of Death event, and it uses this file to find the cause and possible solutions. You can use the Windows Event Viewer to see the BSOD information. Later versions of Windows can also write a complete dump file, in which all the contents of memory are copied out.
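If you're curious whether your machine has any of these minidumps, here is a minimal sketch that lists them, assuming the default dump location %SystemRoot%\Minidump (your system may be configured differently under Startup and Recovery settings):

```python
# List the minidump files Windows writes after a BSOD.
# Assumes the default dump folder; adjust the path if yours differs.

import datetime
import os

dump_dir = os.path.expandvars(r"%SystemRoot%\Minidump")

if os.path.isdir(dump_dir):
    for name in sorted(os.listdir(dump_dir)):
        path = os.path.join(dump_dir, name)
        # Modification time roughly corresponds to when the crash happened.
        mtime = datetime.datetime.fromtimestamp(os.path.getmtime(path))
        print(f"{name}  written {mtime:%Y-%m-%d %H:%M}  {os.path.getsize(path)} bytes")
else:
    print("No minidump folder found; this machine may never have crashed.")
```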

Fighting the Blue Screen of Death:

Find the malware: Maybe the real culprit causing your system to crash is malicious software hiding inside your computer. Scan for and remove such malware if you encounter frequent Blue Screens of Death.
Update the drivers: Always blaming the malware for anything wrong with the device is not fair; maybe the drivers responsible for the smooth functioning of the hardware are the cause of the BSODs. Corrupt driver files can cause BSODs and force your system to restart. So, if you see frequent BSODs, do not hesitate to update the drivers.
Boot into Safe Mode: If your device has frequent BSODs, boot into Safe Mode and check whether the problem persists. Because Windows loads only the essential services in Safe Mode, you can often pinpoint the driver causing the problem.
Use System Restore: Shifting your device back to a previous state can be beneficial in such cases, as it may remove the cause of the Blue Screen of Death.
Hardware check: Check for memory errors using the tools built into your machine, and check whether the device is running at high temperatures (you can use Speccy by Piriform to do so). Faulty hard drives can also cause BSODs. If your memory and temperatures are fine, your device may have some other issue; consult a hardware technician.
Reinstall Windows: This is the Lazarus Pit for a Windows installation on the verge of death. If you can't come up with a solution for the BSODs on your machine, just reinstall your Windows OS to get rid of all those death threats your machine receives every now and then.
The Blue Screen of Death has been around and will continue to haunt you. You must prepare yourself like a warrior and fight back with equal force. Show the blue death what you are capable of; be the savior of your machine.
Okay now, no more heroic thoughts, just keep a few things in mind. Always keep your PC updated, regularly scan for malware and hardware issues, think twice before installing unknown software, and NEVER turn off your machine directly from the power source, as doing so may corrupt your Windows files. Coming back to the heroic thoughts: be the real hero and take down the Blue Screen of Death.

Saturday, 6 February 2016

How a Search Engine Works and Makes Your Life Easier



A few thousand searches were made in the time it took this webpage to load on your computer. But has that ever stimulated your neurons: how does a search engine work?
How does Google serve you the best results in the blink of an eye? Actually, it doesn't matter as long as Google and Bing are there, but the scenario would've been very different if there were no Google, Bing, or Yahoo. Let us dive into the world of search engines and see how a search engine works.

Peeping into the history

The search engine fairy tale began in the 1990s, when Tim Berners-Lee used to add every new web server that went online to a list maintained by the CERN web server. Until September 1993, no search engines existed on the internet, only a few tools capable of maintaining a database of file names. Archie, Veronica and Jughead were the very first entrants in this category.
Oscar Nierstrasz of the University of Geneva is credited with the very first search engine, named W3Catalog. He did some serious Perl scripting and finally brought out the world's first search engine on September 3, 1993. The year 1993 also saw the advent of many other search engines: JumpStation by Jonathon Fletcher, AliWeb, the WWW Worm, and so on. Yahoo! was launched in 1995 as a web directory, but it started using Inktomi's search engine in 2000 and then shifted to Microsoft's Bing in 2009.
Now, about the name that is the prime synonym for the term search engine: Google Search began as a research project of two Stanford graduate students, Larry Page and Sergey Brin, with its initial footprints dating to March 1995. Google's approach was initially inspired by Page's backlink method, which measured the importance of a webpage by the number of backlinks pointing to it from elsewhere on the World Wide Web. “The best advice I ever got,” Page said, recalling how his supervisor Terry Winograd supported his idea. And since then, Google has never looked back.

It all begins with a crawl

A baby search engine in its nascent stage begins exploring the World Wide Web; on its little hands and knees it crawls to every other link it finds on a webpage and stores them in its database.
Now, let's focus on what happens behind the scenes. A search engine incorporates web crawler software, basically an internet bot assigned the task of opening all the hyperlinks present on a webpage and creating a database of text and metadata from all of them. It begins with an initial set of links to visit, called seeds. As it visits those seeds, it adds the new links it finds to the existing list of URLs to visit, known as the crawl frontier.
As the crawler traverses the links, it downloads the information from those web pages to be viewed later in the form of snapshots, since downloading whole webpages requires a whole lot of data, and that comes at a pocket-burning price, at least in countries like India. I can bet that if Google had been founded in India, all their money would go toward paying the internet bills. Hopefully, that's not a topic of concern as of now.
The web crawler explores the web pages based on certain policies (a minimal sketch follows this list):
Selection Policy: The crawler decides which pages it should download and which it shouldn't. The selection policy focuses on downloading the most relevant content of a web page rather than unimportant data.
Re-Visit Policy: The crawler schedules when it should re-open web pages and record the changes in its database. The dynamic nature of the internet makes it very hard for crawlers to stay up to date with the latest versions of webpages.
Parallelization Policy: Crawlers use multiple processes at once to explore the links, which is known as distributed crawling. Since different processes may download the same web page, the crawler maintains coordination between all the processes to eliminate any duplication.
Politeness Policy: When a crawler traverses a website, it simultaneously downloads web pages from it, increasing the load on the web server hosting the website. Hence, crawlers implement a “Crawl-Delay”, waiting a few seconds between downloads from a web server, as governed by the politeness policy.
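Here is the minimal sketch promised above: a naive crawler, with a hypothetical seed URL, that keeps a frontier, skips pages it has already visited, and sleeps between requests as a crude politeness delay. Real crawlers also honour robots.txt, schedule re-visits, and run distributed across many machines.

```python
# A naive web crawler: seeds, a crawl frontier, a visited set,
# and a fixed politeness delay. Standard library only.

import re
import time
import urllib.parse
import urllib.request
from collections import deque

def crawl(seeds, max_pages=10, crawl_delay=1.0):
    frontier = deque(seeds)              # URLs still to visit (crawl frontier)
    visited = set()
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except Exception:
            continue                     # skip pages that fail to download
        # Add every absolute hyperlink found on the page to the frontier.
        for link in re.findall(r'href="(http[^"]+)"', html):
            frontier.append(urllib.parse.urljoin(url, link))
        print(f"Crawled {url} ({len(frontier)} URLs now in frontier)")
        time.sleep(crawl_delay)          # politeness policy: crawl delay
    return visited

crawl(["https://example.com/"])          # hypothetical seed list
```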

High-level Architecture of a standard Web Crawler:

[Illustration: a web crawler branching out from its initial links]
The above illustration depicts how a web crawler works: it opens the initial list of links, then the links inside those pages, and so on.
As Wikipedia notes, computer science researchers Vladislav Shkapenyuk and Torsten Suel observed that:
While it is fairly easy to build a slow crawler that downloads a few pages per second for a short period of time, building a high-performance system that can download hundreds of millions of pages over several weeks presents a number of challenges in system design, I/O and network efficiency, and robustness and manageability.

Indexing the crawls

After the baby search engine crawls all over the internet, it creates an index of all the webpages it finds along the way. Having an index is much better than wasting time hunting for the search query through a heap of large documents; it saves both time and resources.
Many factors contribute to an efficient indexing system for a search engine. The storage techniques used by the indexers, the size of the index, and the ability to quickly find the documents containing the searched keywords are among the factors responsible for an index's efficiency and reliability.
One of the major obstacles on the path to successful web indexing is collision between two processes. Say one process wants to search for a document while, at the same time, another process wants to add a document to the index; this creates a conflict between the two. The problem is worsened further by the search engines' use of distributed computing to handle more data.
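As a minimal sketch of that collision problem, a single lock can ensure a search never reads the index while another thread is mid-update. (Real engines use far more sophisticated concurrency control, especially across distributed machines.)

```python
# One lock shared by readers and writers: crude, but it prevents a
# search from observing a half-updated index.

import threading

index = {}                 # keyword -> set of document ids
index_lock = threading.Lock()

def add_document(doc_id, words):
    with index_lock:       # the writer takes the lock before mutating
        for word in words:
            index.setdefault(word, set()).add(doc_id)

def search(word):
    with index_lock:       # the reader takes the same lock
        return set(index.get(word, set()))

add_document("doc1", ["brown", "fox"])
print(search("brown"))     # {'doc1'}
```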

Types of Index

Forward: In this type of index, all the keywords present in a document are stored in a list against that document. A forward index is easy to create in the early phase of indexing, as it lets asynchronous indexers collaborate with each other.
Reverse: Forward indices are sorted and converted into reverse indices, in which each keyword points to all the documents containing it. Reverse indices ease the process of finding relevant documents for a given search query, which is not the case with forward indices.
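Here is a toy sketch of both index types, with made-up documents: build the forward index first, then invert it.

```python
# Build a forward index (document -> keywords), then invert it into a
# reverse index (keyword -> documents). Documents are hypothetical.

docs = {
    "doc1": "the quick brown fox",
    "doc2": "the lazy brown dog",
}

# Forward index: the keywords found in each document.
forward_index = {doc_id: text.split() for doc_id, text in docs.items()}

# Reverse index: for each keyword, the set of documents containing it.
reverse_index = {}
for doc_id, words in forward_index.items():
    for word in words:
        reverse_index.setdefault(word, set()).add(doc_id)

print(reverse_index["brown"])   # {'doc1', 'doc2'} answers a query directly
```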

Parsing of Documents

Parsing, also called tokenization, refers to breaking a document down into components such as keywords (called tokens), images and other media, so that they can later be inserted into indices. The method focuses on understanding the native language and predicting the keywords a user might search for, which serves as the foundation of an effective web indexing system.
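As a minimal sketch, here is the kind of naive tokenizer this implies: lowercase the text, strip punctuation, and split into word tokens. Note that this exact approach breaks down for the whitespace-free scripts discussed next.

```python
# A naive tokenizer: lowercase, keep alphanumeric runs as tokens.

import re

def tokenize(text):
    """Break a document into lowercase keyword tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

print(tokenize("Search Engines: how do they work?"))
# ['search', 'engines', 'how', 'do', 'they', 'work']
```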
The major challenges include finding the word boundaries of the keywords to be extracted, since languages like Chinese and Japanese generally don't have whitespace in their scripts. Understanding the ambiguity a language possesses is also a concern, as some languages differ slightly or even considerably from region to region. Webpages that fail to clearly declare the language they use also increase the workload on the indexers.
Search engines must also be able to recognize various file formats and successfully extract data from them, which requires utmost care.
Meta tags are also very useful for building indices quickly; they reduce the web indexer's effort and remove the need to completely parse the whole document. You'll find meta tags attached at the bottom of this article.

Searching the index

Now the baby search engine is not a baby anymore. It has learnt how to crawl, how to grab things quickly and efficiently, and how to arrange its things systematically. Suppose its friend asks it to find something in that arrangement; what will it do? Four types of search queries are in use. Though they are not formally derived, they have evolved over time and have been found to hold for the real-life queries users make.
Informational: These queries have thousands of results and cover general topics that enhance the user's knowledge. For example, when you search for, say, Steve Jobs, you'll be presented with all the links relevant to Steve Jobs.
Navigational: These queries are used to reach one specific website. For example, searching for Wikipedia to get to wikipedia.org.
Transactional: Queries focused on the user's intent to perform a particular action, which may involve a pre-defined set of instructions. For example, how to find your lost/stolen laptop.
Connectivity: These queries are not used frequently; they focus on how connected the index created from a website is. For example, searching for how many pages there are on Wikipedia.
Google and Bing have built some serious algorithms capable of determining the most relevant results for your query. Google claims to calculate your search results based on over 200 factors, such as the quality of the content, how fresh it is, the safety of the webpage, and many more. They have some of the world's greatest minds at their search labs doing hard calculations and dealing with mind-blowing formulae, only to make Search simpler and quicker for you.
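Those 200-odd factors are secret, of course. As a toy stand-in, here is a one-factor ranker that scores documents by term frequency normalised by document length, over made-up documents:

```python
# Rank documents for a query by a single signal: how often the query
# terms appear, relative to document length. Purely illustrative.

def score(query, docs):
    terms = query.lower().split()
    ranking = []
    for doc_id, text in docs.items():
        words = text.lower().split()
        # Count occurrences of every query term, normalise by length.
        s = sum(words.count(t) for t in terms) / len(words)
        ranking.append((s, doc_id))
    return sorted(ranking, reverse=True)

docs = {
    "doc1": "steve jobs founded apple with steve wozniak",
    "doc2": "jobs report released by the labor department",
}
print(score("steve jobs", docs))   # doc1 ranks above doc2
```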

Other notable features*

Image Search: You'll be surprised to learn of Google's inspiration for their famous image search tool. J.Lo, yeah you heard that right: J.Lo and her green Versace (ver-SAH-chay) gown at the 2000 Grammy Awards were the real reason Google came out with its image search, as people were busy Googling her.
At the time, it was the most popular search query we had ever seen. But we had no surefire way of getting users exactly what they wanted: J.Lo wearing that dress. Google Image Search was born.
said Eric Schmidt in his piece titled “The Tinkerer's Apprentice”, published on January 19, 2015.
Voice Search: Google was the first to introduce voice search on its search engine, after a lot of hard work, and other search engines have subsequently implemented it as well.
Spam Fighting: Search engines deploy serious algorithms to guard you from spam attacks. Spam is basically a message or file spread all over the internet, whether for advertising or for transmitting viruses. On this front, Google also manually informs websites it finds responsible for spreading spam messages on the internet.
Location Optimization: Search engines are now capable of displaying results based on the user's location. If you search “What's the weather like in Bengaluru?”, the weather stats will be for Bengaluru.
Understands you better: Modern search engines are capable of understanding the meaning of the user's query rather than just matching the keywords entered.
Auto-complete: The ability to predict your search query as you type, based on your previous searches and searches made by other users.
Knowledge Graph: This feature, provided by Google Search, shows off the engine's ability to provide search results based on real-life people, places, and events.
Parental Control: Search engines allow the parents of small kids to control what their children are up to on the internet.
* It is hard to cover the vast list of features provided by these mighty search engines.