Wednesday, 26 October 2011

Your Customized Desktop!

Well, I am a huge fan of dark, awesome wallpapers and gizmo gadgets like the ones used in Iron Man! That got me thinking, and I decided to have a little fun with a cool new desktop customization app called Rainmeter!
My Desktop

So if you love the look of it, or if you want it for yourself, just follow these steps and make sure you don't miss any of them.

STEP I:  
Choose a good old theme for your desktop. Here it is Dark Gizmology (yup, that is my own term, guys and girls).
I chose this one !

STEP II:
Download RocketDock from Punk Labs.


STEP III:
Download the Mac OS X Carbon theme for RocketDock.

STEP IV:
Now download Rainmeter, a system monitor and customization utility for Windows.

STEP V:
Download the Lifeless theme for Rainmeter.

Well, now you have all the things that you need. I guess it's time to start putting them together.
Understanding RocketDock is easy enough, and they give you directions on installing the new theme. Make sure that you don't change its settings too much; I had to reinstall it 4 times before I decided not to mess around with the layering.

Now the tricky part is using Rainmeter! It might seem difficult at first, so my suggestion would be to read the User Guide first! :)
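If you want to peek under the hood while reading the guide, a Rainmeter skin is just a plain-text .ini file. Here is a minimal sketch (the section and meter names are my own, not from the Lifeless theme) that shows a single meter printing CPU usage:

```ini
[Rainmeter]
; how often the skin refreshes, in milliseconds
Update=1000

[MeasureCPU]
; built-in measure that reports total CPU usage as a percentage
Measure=CPU

[MeterCPU]
; a simple text meter that displays the measure's value
Meter=String
MeasureName=MeasureCPU
X=0
Y=0
FontColor=255,255,255
Text=CPU: %1%
```

Save something like this as Skin.ini inside a folder under your Skins directory and load it from the Rainmeter manager; the themes you download are just bigger versions of the same idea.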
As per request, I worked to find out if this is possible on Mac OS. Well guys, Rainmeter won't work for Mac users, so I have taken the liberty of including a replacement, GeekTool. It ran fine when I tried it on my other laptop, but it is not as accurate, and achieving this degree of detail MAY not be possible.

Add or remove anything you want; make it as informative or as stylish as it can get!
All the Best :D

Please let me know your suggestions and feedback, both positive and negative :) . Comment below, or mail me at technostrikers@gmail.com. If you run into any problem, please feel free to contact me.
Hope to see your desktops sometime. Have a good day!

Tuesday, 4 October 2011

10 Search Engines to Explore the Invisible Web!


No, it’s not Spiderman’s latest web slinging tool but something that’s more real world. Like the World Wide Web.
The Invisible Web refers to the part of the WWW that's not indexed by the search engines. Most of us think that search powerhouses like Google and Bing are like the Great Oracle… they see everything. Unfortunately, they can't, because they aren't divine at all; they are just web spiders that index pages by following one hyperlink after another.
But there are some places where a spider cannot enter. Take library databases which need a password for access. Or even pages that belong to private networks of organizations. Dynamically generated web pages in response to a query are often left un-indexed by search engine spiders.

Search engine technology has progressed by leaps and bounds. Today, we have real-time search and the capability to index Flash-based and PDF content. Even then, there remain large swathes of the web which a general search engine cannot penetrate. The terms Deep Net, Deep Web, and Invisible Web linger on.
To get a more precise idea of the nature of this 'Dark Continent' of invisible web search, read what Wikipedia has to say about the Deep Web. The figures are attention grabbers: the size of the open web is 167 terabytes, while the Invisible Web is estimated at 91,000 terabytes. Check this out: the Library of Congress, in 1997, was figured to have close to 3,000 terabytes!
How do we get to this mother lode of information?
That’s what this post is all about. Let’s get to know a few resources which will be our deep diving vessel for the Invisible Web. Some of these are invisible web search engines with specifically indexed information.

Infomine

invisible web search engines
Infomine has been built by a pool of libraries in the United States. Some of them are University of California, Wake Forest University, California State University, and the University of Detroit. Infomine 'mines' information from databases, electronic journals, electronic books, bulletin boards, mailing lists, online library card catalogs, articles, directories of researchers, and many other resources.
You can search by subject category and further tweak your search using the search options. Infomine is not only a standalone search engine for the Deep Web but also a staging point for a lot of other reference information. Check out its Other Search Tools and General Reference links at the bottom.

The WWW Virtual Library

invisible web search engines
This is considered to be the oldest catalog on the web and was started by Tim Berners-Lee, the creator of the web. So, isn't it strange that it finds a place in the list of Invisible Web resources? Maybe, but the WWW Virtual Library lists quite a lot of relevant resources on quite a lot of subjects. You can go vertically into the categories or use the search bar. The screenshot shows the alphabetical arrangement of subjects covered at the site.

Intute

invisible web search engines
Intute is UK centric, but it has some of the most esteemed universities of the region providing the resources for study and research. You can browse by subject or do a keyword search for academic topics like agriculture to veterinary medicine. The online service has subject specialists who review and index other websites that cater to the topics for study and research.
Intute also provides over 60 free online tutorials to help you learn effective internet research skills. The tutorials are step-by-step guides arranged around specific subjects.

Complete Planet

search invisible web
Complete Planet calls itself the 'front door to the Deep Web'. This free and well-designed directory resource makes it easy to access the mass of dynamic databases that are cloaked from a general-purpose search. The databases indexed by Complete Planet number around 70,000 and range from Agriculture to Weather. Also thrown in are databases like Food & Drink and Military.
For a really effective Deep Web search, try out the Advanced Search options where among other things, you can set a date range.

Infoplease

search invisible web
Infoplease is an information portal with a host of features. Using the site, you can tap into a good number of encyclopedias, almanacs, an atlas, and biographies. Infoplease also has a few nice offshoots like Factmonster.com for kids and Biosearch, a search engine just for biographies.

DeepPeep

search invisible web
DeepPeep aims to enter the Invisible Web through forms that query databases and web services for information. Typed queries open up dynamic but short lived results which cannot be indexed by normal search engines. By indexing databases, DeepPeep hopes to track 45,000 forms across 7 domains.
The domains covered by DeepPeep (Beta) are Auto, Airfare, Biology, Book, Hotel, Job, and Rental. Being a beta service, there are occasional glitches as some results don’t load in the browser.

IncyWincy

how to use the invisible web
IncyWincy is an Invisible Web search engine and it behaves as a meta-search engine by tapping into other search engines and filtering the results. It searches the web, directory, forms, and images. With a free registration, you can track search results with alerts.

DeepWebTech

how to use the invisible web
DeepWebTech gives you five search engines (and browser plugins) for specific topics. The search engines cover science, medicine, and business. Using these topic specific search engines, you can query the underlying databases in the Deep Web.

Scirus

how to use the invisible web
Scirus has a pure scientific focus. It is a far reaching research engine that can scour journals, scientists’ homepages, courseware, pre-print server material, patents and institutional intranets.

TechXtra

TechXtra concentrates on engineering, mathematics and computing. It gives you industry news, job announcements, technical reports, technical data, full text eprints, teaching and learning resources along with articles and relevant website information.
Just like general web search, searching the Invisible Web is also about looking for the needle in a haystack. Only here, the haystack is much bigger. The Invisible Web is definitely not for the casual searcher. It is deep but not dark, because if you know what you are searching for, enlightenment is only a few keywords away.
Do you venture into the Invisible Web? Which is your preferred search tool?
This information was gathered from StumbleUpon and Google.

How Google Works!!

Hello guys... this would be my first blog post ever, so please don't be harsh while judging this article.
Also I ask for both POSITIVE AND NEGATIVE feedback. Thanks.

Google today is an integrated part of our life, but have you ever wondered how it actually works?
Very few of us really know. So today this post aims to solve that QUERY which few would have GOOGLED: how does GOOGLE work?

A background:
In the past 2 years, Google has doubled its workforce, upgraded its search engine to speed up results, and now answers more queries than Microsoft and Yahoo combined!
 

ORIGIN:
Some 14 years ago, Larry Page, a Ph.D. student at Stanford, was searching for a name to register his search engine. He settled on the word "Googol", meaning the number one followed by 100 zeroes.
However, thanks to his friend Sean Anderson's typo, a new phenomenon called Google was born!



1. QUERY BOX
It all starts with somebody typing in a request for information, be it about the nearest branch of your preferred bank, or an SMS you wish to send to your special one :) .
2. DOMAIN NAME-SERVERS
“Hello, this is your operator . . . ”
The software for Google’s domain-name servers runs on computers in leased or company-owned data centers all over the world, including one in the old Port Authority headquarters in Manhattan. Their sole purpose is to shepherd searches into one of Google’s clusters as efficiently as possible, taking into account which clusters are nearest to the searcher and which are least busy at that instant. 
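The routing idea can be sketched in a few lines: prefer the nearest cluster unless it is overloaded. The cluster names, latencies, and load figures below are invented for illustration; Google's real policy is not public.

```python
# Hypothetical sketch of how a DNS front end might pick a cluster:
# favour the nearest one, but skip any that are too busy right now.

def pick_cluster(clusters, max_load=0.9):
    """Return the lowest-latency cluster whose load is below max_load."""
    usable = [c for c in clusters if c["load"] < max_load]
    if not usable:                       # everything is busy: fall back
        usable = clusters
    return min(usable, key=lambda c: c["latency_ms"])

clusters = [
    {"name": "nyc", "latency_ms": 12, "load": 0.95},  # nearby but swamped
    {"name": "iad", "latency_ms": 25, "load": 0.40},
    {"name": "ams", "latency_ms": 90, "load": 0.10},
]

print(pick_cluster(clusters)["name"])   # the nearby-but-busy cluster is skipped
```

The point is the trade-off: the closest machine is not always the right one, so the name server balances distance against load at that instant.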
3. THE CLUSTER
The request ­continues into one of at least 200 clusters, which sit in Google-owned data centers worldwide.
4. GOOGLE WEB SERVER
This program splits a query among hundreds or thousands of machines so that they can all work on it at the same time. It's the difference between doing your homework assignment alone and asking the entire class to each do different questions and compiling the answers at the end!
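The classroom analogy can be sketched as a fan-out: one query is scattered across several index shards in parallel, and the partial answers are merged. The shard contents here are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy fan-out: each shard holds a slice of the index mapping a query
# term to document ids. All shards are searched at the same time.
SHARDS = [
    {"apple pie": [3, 7], "banana": [1]},
    {"apple pie": [12], "cherry": [5]},
    {"banana": [9], "apple pie": [20]},
]

def search_shard(shard, query):
    """One worker's job: scan a single shard for the query."""
    return shard.get(query, [])

def search(query):
    with ThreadPoolExecutor(max_workers=len(SHARDS)) as pool:
        partials = list(pool.map(search_shard, SHARDS, [query] * len(SHARDS)))
    # merge the partial results from every shard
    return sorted(doc for part in partials for doc in part)

print(search("apple pie"))   # hits gathered from all three shards
```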
5. INDEX SERVER
Everything Google knows is stored in a massive database. But rather than waiting for one computer to sift through those gigabytes of data, Google has hundreds of computers scan its "card catalog" at the same time to find every relevant entry. Popular searches are cached (held in memory) for a few hours rather than run all over again. Yup, that means everything! :P
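The caching trick can be sketched as a small time-stamped dictionary; the three-hour lifetime below is an assumption for illustration, matching the "few hours" in the paragraph above.

```python
import time

# Keep each result with the time it was computed, and only rerun the
# expensive lookup once the entry is older than the time-to-live.
TTL_SECONDS = 3 * 60 * 60    # assumed three-hour lifetime
_cache = {}

def cached_search(query, run_query):
    now = time.time()
    hit = _cache.get(query)
    if hit and now - hit[0] < TTL_SECONDS:
        return hit[1]                   # served straight from memory
    result = run_query(query)           # the expensive index scan
    _cache[query] = (now, result)
    return result

calls = []
def slow_lookup(q):
    calls.append(q)                     # record every real index scan
    return f"results for {q}"

cached_search("news", slow_lookup)
cached_search("news", slow_lookup)      # second call is a cache hit
print(len(calls))                       # the index was only scanned once
```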
6. DOCUMENT SERVER
After the index server compiles its results, the document server pulls all the relevant documents—the links and snippets of text from its massive database. How does Google search the Web so quickly? It doesn’t. It keeps three copies of all the information from the internet that it has indexed in its own document servers, and all those data have already been prepped and sorted.


7. SPELLING SERVER
Google doesn't read words; it looks for patterns of characters, be they in English or Sanskrit. If it sees your requested pattern a thousand times but finds a million hits for a similar pattern that's off by one character, it connects the dots and politely suggests what you probably meant, even while it provides you the results, if any, for your fat-fingered query for "hwedge funds" or an abbreviated word.
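A bare-bones "did you mean" can be sketched with Python's standard difflib, comparing the typed pattern against a list of popular terms. The vocabulary here is made up, and real spelling servers also weigh how often each candidate is actually searched.

```python
import difflib

# Invented list standing in for terms the engine has seen very often.
popular_terms = ["hedge funds", "mutual funds", "index funds"]

def did_you_mean(query):
    """Suggest the popular term most similar to the typed query."""
    matches = difflib.get_close_matches(query, popular_terms, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(did_you_mean("hwedge funds"))   # off by one character, still matched
```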


8. THE MONEY MACHINE:GOOGLE AD SERVERS
Each query is simultaneously run through an ad database, and matches are fed to the Web server so that they're placed on the results page. The ad team is in a race with the search team. Google aims to deliver all searches as quickly as possible; if ad results take longer to pull up than search results, they don't make it onto the results page, and Google does not make any money on that particular search.
9. PAGE BUILDER
The Google Web server collects the results of the thousands of operations it runs for a query, organizes all the data, and draws Google’s cunningly simple results page on your browser window, all in less time than it took to read this sentence.
10. THE END OF THE USER STORY: RESULTS
Often in 0.25 seconds or less.
CLUSTER CONTROL
Google’s genius lies in its networking software, which helps thousands of cheap computers in a cluster act like one huge hard drive. Those inexpensive computers allow Google to replace parts without stopping the whole show: If a computer drops dead, there are at least two others ready to take its place while an engineer swaps out the busted machine.
IT'S ALL ABOUT POWER
Just about the only thing limiting Google’s performance is how much electricity the company can buy. One of its newest data centers (code name: Project 02) is near the Columbia River in The Dalles, Oregon, which has access to 1.8 gigawatts of cheap hydroelectric power; not coincidentally, this is where major internet hookups from Asia connect to U.S. networks. The byte factory has two computing centers, each the size of a football field.
THE MEMORY BANK
Based on the few numbers Google releases, experts guess that at least 20 petabytes of data are stored on its servers. But Googlers are famous for understatement; Wired says Google may have 200 petabytes of capacity. So how much is that? If your iPod were just 1 petabyte (one million gigabytes), you’d have about 200 million songs to shuffle. And if you started downloading a petabyte over your high-speed internet connection, your great-great-great-great-grandchild might still be around when the last few bytes get transferred, in 2514.
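You can sanity-check that arithmetic yourself. The ~5 MB song size and the half-megabit line speed below are my own assumptions, chosen to match the paragraph's figures:

```python
# 1 PB = one million gigabytes, in decimal units
PB_BYTES = 10**15
SONG_BYTES = 5 * 10**6       # assume an average ~5 MB song

songs = PB_BYTES // SONG_BYTES
print(f"{songs:,} songs")    # about 200 million, as the paragraph says

# Downloading 1 PB over an assumed 0.5 Mbit/s "high-speed" line:
seconds = PB_BYTES * 8 / 0.5e6
years = seconds / (365 * 24 * 3600)
print(f"about {years:.0f} years")   # roughly five centuries, i.e. ~2514
```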
PAGE RANKINGS
Google decides how reliable a site is—and thus how important the site’s content will be when Google forms a list of search results—by considering more than 200 factors as it analyzes content. But the secret sauce is Google’s patented formula for following and scoring every link on a page to learn how different sites connect, which means a site is deemed reliable based largely on the quality of the sites that link to it. 
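Setting aside the other 200-odd factors, the link-scoring core can be sketched as the classic power-iteration computation: every page repeatedly passes a share of its own score to the pages it links to. The three-page link graph below is invented for illustration.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {}
        for p in pages:
            # score flowing in from every page q that links to p
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            new[p] = (1 - damping) / len(pages) + damping * incoming
        rank = new
    return rank

# a -> b, b -> c, c -> a and b: page "b" has the most incoming weight
links = {"a": ["b"], "b": ["c"], "c": ["a", "b"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))   # "b" ends up ranked highest
```

Notice that "b" wins not by having many outgoing links but by being linked to from two other pages, which is exactly the "quality of the sites that link to it" idea.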
GOOGLEBOTS
Google deploys programs called spiders to build its copies of the internet. On popular sites, Googlebots may follow every link several times an hour. As they scour the pages, the spiders save every bit of text or code. The raw data are pulled back into the cluster, run through the mill, and scheduled to incrementally replace the older data already on the index and doc servers, ensuring that results are fresh, never frozen.
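A toy Googlebot can be sketched as a breadth-first walk over a link graph, saving each page's text exactly once. The three-page "web" below is invented; a real crawler fetches pages over HTTP and obeys politeness rules.

```python
from collections import deque

# A made-up in-memory web: each page has some text and outgoing links.
WEB = {
    "home":  {"text": "welcome", "links": ["news", "about"]},
    "news":  {"text": "headlines", "links": ["home"]},
    "about": {"text": "who we are", "links": []},
}

def crawl(start):
    """Breadth-first crawl from `start`, returning url -> saved text."""
    index, queue, seen = {}, deque([start]), {start}
    while queue:
        url = queue.popleft()
        page = WEB[url]
        index[url] = page["text"]        # save the page's raw text
        for link in page["links"]:
            if link not in seen:         # never revisit a page
                seen.add(link)
                queue.append(link)
    return index

print(sorted(crawl("home")))   # every reachable page gets indexed once
```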
So that's how the world's most popular search engine actually works. Next time you type in a query, just think about where your request goes and what happens to it; before you can finish thinking, the result will be in front of you. It's not something that happens between your internet provider and your PC or laptop: IT'S A GLOBAL PHENOMENON!!
 