Thursday, October 4, 2012

How to Use Bounce Rate as a Quality Guide

What Is a Bounce Rate?
A bounce occurs when a user visits a website and leaves from the page on which they arrived, without moving to another page on the same website.
A higher bounce rate means users leave your website without viewing any additional pages. A lower bounce rate means users stay on your website and move on to other pages within it.
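As a rough worked example, bounce rate is simply the share of sessions that viewed only the landing page. The short Python sketch below shows the arithmetic; the session counts are made-up numbers, not real analytics data.

```python
# A minimal illustration of how bounce rate is computed.
# The session counts are made-up numbers, not real analytics data.
def bounce_rate(single_page_sessions: int, total_sessions: int) -> float:
    """Percentage of sessions that left from the page they arrived on."""
    if total_sessions == 0:
        return 0.0
    return 100.0 * single_page_sessions / total_sessions

print(f"{bounce_rate(420, 1000):.1f}%")  # 420 bounces out of 1,000 sessions -> 42.0%
```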
Keeping track of your bounce rate gives you a fantastic quality indicator for determining which pages are working for you. Before you become alarmed, know this: a high bounce rate does not indicate failure! A high bounce rate might simply be the nature of your website.
Referential vs. Content-Driven Websites
A referential-driven page or website presents information solely for the purpose of providing an authoritative and unbiased resource. Online referential resources may include dictionaries, glossaries, timelines, encyclopedias, etc., and these pages are often static (i.e., they do not require new material to stay fresh). Referential-driven websites typically have higher bounce rates because that's the nature of the website - readers search for the reference material, find the website, and leave.
Content-driven websites must stay fresh and constantly provide relevant material. Examples of content-driven websites include article directories, blogs, and news pages. A higher bounce rate on this type of site may be an indicator of poor quality, irrelevant content, poor navigation, and more.
Expert Authors Should Aim for a Low Bounce Rate
As content providers, Expert Authors who provide high quality content should aim for a low bounce rate (50% or less) and focus on increasing the average time the visitor spends on the site. Unlike referential-driven website providers, content-driven websites want visitors to stay on the website and continue clicking through to different articles or areas of informative content on the website.
Using tools like Google Analytics, gather your pageviews, visitors, visitor duration, and bounce rate. Compare pages and determine why one page may be performing poorly and why one page may be performing well. How was the quality of writing? Navigation issues? Keyword selection? Ads on the page? Length of the article?
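For illustration, here is a small Python sketch of that page-by-page comparison. The page names and metrics are hypothetical, and the 50% and one-minute thresholds come from the guidance in this post rather than from any real Analytics export.

```python
# Hypothetical per-page metrics of the kind you might export from an analytics tool.
pages = [
    {"page": "/how-to-write-titles", "pageviews": 1200, "avg_time_sec": 145, "bounce_rate": 38.0},
    {"page": "/keyword-research",    "pageviews":  900, "avg_time_sec":  52, "bounce_rate": 71.0},
    {"page": "/about-the-author",    "pageviews":  300, "avg_time_sec":  30, "bounce_rate": 85.0},
]

# Flag content pages that bounce heavily *and* hold visitors for under a minute.
for p in pages:
    needs_work = p["bounce_rate"] > 50 and p["avg_time_sec"] < 60
    status = "review" if needs_work else "ok"
    print(f"{p['page']:<22} bounce {p['bounce_rate']:5.1f}%  time {p['avg_time_sec']:4d}s  -> {status}")
```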
To reduce your website's bounce rate and increase the time on the page/site, use the pages that performed well as a template or theme to emulate on other pages while still providing original content. Additionally, try the following strategies:
  • Update your website with quality content frequently
  • Try out features important to your audience
  • Create interactivity (i.e., two-way flow of information between you and your audience)
  • Give your audience a reason to stick around (e.g., resources, quizzes, games, etc.)
  • Routinely check your links to ensure they are working properly
  • Continue driving traffic to your website via quality articles, social media, etc.
Please note: Don't be discouraged by a high bounce rate if visitors stay on the page longer than average (approximately 2 minutes). It could indicate that the reader found their information, read the article, and left. However, if a content-driven page is clocking under a minute with a high bounce rate, take these as key indicators that improvement is needed.
Understanding the dynamics and performance of your website is the key to success. Not only will you be able to target your audience by providing the content they want, both in your articles and on your website, but you will also save yourself the agony of wondering what is wrong with your website and your articles.
A National Repository on Maternal Child Health developed by me has a very good average bounce rate of 16%.
Questions? Comments? Visit this post online!

Thursday, July 19, 2012

FREE! Read Indian graphic novels online


Utter the words ‘Internet’, ‘comics’ and ‘free’ in the same breath, and the words ‘online piracy’ crop up almost inevitably. But don’t worry; we’re not urging you to download off Piratebay. We’re talking about graphicindia.com, a website dedicated to Indian graphic novels. The comic site features works by writers Samit Basu, Samik Dasgupta and Siddharth Kotian, among others. But here’s the best part: you can read an ongoing series for free, one chapter a week.

Author Samit Basu says, “The model of free comics is nearly 60 years old. In America, since the beginning of the genre, you’d get free promotional comics. DC even has a ‘free comic book day’. The idea is to take the same model online.” Samit’s newest novel, titled Unholi, is a sort of apocalyptic take on Holi. Brilliantly (and somewhat eerily) illustrated by artist Jeevan Kang, it depicts a zombie attack in Delhi on the day of the festival. For now, you can read the first four chapters online, while Samit works on the rest of it.

Other interesting reads worth a look include a Ramayana set in the year 3392 AD and a horror series called Untouchable. There’s also Devi, a graphic novel about a goddess who looks more like a superhero in her leather overalls. A comic by Stan Lee (who co-created Spider-Man, Iron Man and Thor, among others) about a superhero in Mumbai is one of the most awaited novels on the website.

So, if all the content is available for free, how do the novels (and publishers Liquid Comics, who run the website) make money? “In a digital age, the greatest value for any content company is to build loyal audiences,” feels Sharad Devarajan, CEO and founder of Liquid Comics. Sharad and his team intend to build their audience-base through free content and hope to have them buy a physical copy or the e-book subsequently.

With the launch of specialised comic book libraries in the city and the Comic Con taking place in three cities across the country, has the audience for the genre grown, or does it remain niche in India? Sharad says, “Five years back, comics were still perceived to be kids’ products. Now, a new generation of Indian creators has begun expanding the boundaries of the medium.” Samit adds, “Right now, the market is building slowly. It’s free, so the readership is big. When you have to pay for it, it’ll be a whole different story.” Either way, for those of us who love a good graphic novel, there’s no reason to complain.

Friday, June 29, 2012

Digital object identifier (DOI) becomes an ISO standard

A new International Standard that provides a system for assigning a unique international identification code to objects for use on digital networks is expected to bring benefits for publishers, information managers, multi-media distributors, archive and cultural heritage communities, and the internet technology industry.
 
Published by ISO (International Organization for Standardization), ISO 26324:2012, Information and documentation -- Digital object identifier system, describes an efficient means of identifying an entity over the Internet, used primarily for sharing with an interested user community or managing as intellectual property.

A DOI name is an identifier of an entity – physical, digital or abstract – on digital networks. It provides information about that object, including where the object, or information about it, can be found on the Internet.

Applications of the DOI system include (but are not limited to) managing information and documentation location and access; managing metadata; facilitating electronic transactions; persistent unique identification of any form of any data; and commercial and non-commercial transactions.

"Unique identifiers (names) are essential for the management of information in any digital environment,” said Dr. Norman Paskin, Managing Agent of the International DOI Foundation, and Convenor of the ISO group that developed the standard. “Hence the DOI system is designed as a generic framework applicable to any digital object, providing a structured, extensible means of identification, description and resolution.”
“The DOI system is designed for interoperability: that is to use, or work with, existing identifier and metadata schemes. DOI names may also be expressed as a URL, an e-mail address, other identifiers and descriptive metadata.”
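As an illustration of the point that a DOI name may be expressed as a URL, any DOI can be resolved through the public doi.org proxy. The short Python sketch below follows the redirect for 10.1000/182 (the DOI of the DOI Handbook) and prints the URL it currently points to; the function name is ours, not part of any DOI library.

```python
# A minimal sketch of DOI resolution through the public doi.org proxy.
import urllib.request

def resolve_doi(doi_name: str) -> str:
    """Follow the proxy's redirect and return the URL the DOI currently resolves to."""
    with urllib.request.urlopen(f"https://doi.org/{doi_name}") as response:
        return response.geturl()

print(resolve_doi("10.1000/182"))  # 10.1000/182 is the DOI of the DOI Handbook
```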

ISO 26324:2012 gives the syntax, description and resolution functional components of the digital object identifier system. It also gives the general principles for the creation, registration and administration of DOI names.

The DOI system was initiated by the International DOI Foundation (a not-for-profit, member-based organization initiated by several publishing organizations) in 1998. The International DOI Foundation is the Registration Authority for ISO 26324. To date, some 60 million DOI names have been assigned through a growing federation of Registration Agencies around the world.
More information is available on the Website of the International DOI Foundation, including a list of frequently asked questions: www.doi.org

ISO 26324:2012, Information and documentation -- Digital object identifier system, was developed by ISO technical committee ISO/TC 46, Information and documentation, subcommittee SC 9, Identification and description. It is available from ISO national member institutes (see the complete list with contact details). It may also be obtained directly from the ISO Central Secretariat, price 92 Swiss francs, through the ISO Store or by contacting the Marketing, Communication & Information department.

Media Contact

Elizabeth Gasiorowski-Denis
Communication Officer and Editor of ISO Focus
Communication Services

Thursday, June 28, 2012

LIS UGC NET Questions and Answers June 2012

1. Roget's Thesaurus is arranged in a classified manner
2. TCP/IP is an Internet protocol
3. In AACR2, Part A Section 12 deals with serial publications
4. In keyword indexing, the author keyword index is WADEX
5. Odd one out among ANSI, BIS, BSI, ESPN
6. Internet acting as a-.

7. BIOS is booting software
8. Scale and projections are seen in geographical sources
9. Segmentation is associated with information marketing
10. A stop word list is used in automated keyword indexing
11. The relation between two subjects is a phase relation
12. Redundancy in information is an unwanted source and should be eliminated
13. Internet filtering is a form of Internet censorship
14. The information network for universities and colleges in the UK is JANET
15. Shodhganga is an e-theses repository

16. The first university to start an M.Phil. programme was Delhi University
17. Headquarters of IASLIC is located at Kolkata
18. Minimal, Middling and Maximum theories were given by Samuel Rothstein
19. CODEN is used in serials
20. Calculation of impact factor is done by Web of Knowledge and Scopus
21. Science Citation Index was published by Thomson Reuters
22. WIPO is a United Nations organization
23. The term 'information literacy' was coined by P. Zurkowski (1974)
24. Information Power was published by ...........
25. Bulletin board service on the Internet is a blog
26. Scalar chain means authority structure
27. Students influenced by the Internet: here, student is the dependent variable
28. RFID is used in circulation and security
29. Browne charging system is easy to handle and recordable
30. Information explosion and cost make resource sharing necessary
31. Square brackets [ ] are used in AACR2 for information from other external sources
32. Vidyanidhi is an institutional repository
33. Xerography is also called electrophotography
34. Solomon four-group design is a pretest-posttest control group design
35. Blair and Maron model is for the STAIRS project
36. Idiographic hypothesis refers to the individual
37. Who published the book entitled "Information Power: Building Partnerships for Learning"?
      American Association of School Librarians (AASL)
38. Heart and Soul of the Indian Constitution?
      Right to Constitutional Remedies
39. An inference engine is available in?
      Expert system
40. The protocol used to transfer files from one system to another system is?
      File Transfer Protocol (FTP)
41. In 1974, Zurkowski coined which term?
      Information literacy
42. For what purpose is the Solomon four-group design used?
      Quantitative analysis, i.e., data analysis



Sunday, June 17, 2012

Why Are Library Catalogues Not as User-Friendly as Search Engines?


Traditional library card catalogues are data-centered ‘handicrafts’ with lots of rigid rules controlling their access and descriptions, and hence they are naturally very much under-used. Since this legacy continued in modern Online Public Access Catalogues (OPACs), as early OPACs functioned like digital versions of card catalogues, end-users also continued to admire library card catalogues and OPACs as ‘handicrafts’ rather than understand and use them extensively. Whatever limited use is made of them is more for searching known items and/or as adjuncts to the library circulation system than as an information retrieval tool. Interestingly, many studies have reported that a large majority of users prefer browsing books on the shelves of libraries to browsing library catalogues.


Search Engines intuitively captured the imagination of end-users with many simple and easy-to-understand features for information discovery and access. User-centric design, self-service, seamlessness, natural language search, fuzzy search, auto-suggestion of search terms, spell-check, auto-plurals, auto-word truncation, showing similar items/pages, relevance ranking, popularity tracking, interaction and feedback, provision for a variety of filtering and browsing options, etc. are the features users became acquainted with through Search Engines. They never expected users to undergo information literacy training, or even to have a search strategy or prepare a complex search query, but allowed users to enter whatever natural language words came to their mind in a search box, with a ‘search’ or ‘go’ button adjacent to it to click and execute, without the burden of knowing field tags, Boolean operators, data structures and so on. As a matter of fact, unlike OPACs, by default they did not restrict the search terms to select fields (even though that option is available), and this feature greatly increased the relevance of search results.


Search Engines went to the extent of automatically deciding, as soon as two or more keywords are entered in the search box, whether to execute a phrase search or a Boolean ‘AND’ search. Some clever Search Engines execute both in sequence, i.e., first as a phrase search and then as a Boolean AND search if the resulting hits fall below a certain pre-specified number. Once the search results appear, Search Engines effectively capture the attention of users with a relevance-ranked presentation and the option to change the ranking criteria, for example to ‘latest first’ (i.e., by date of publication). The display of results by Search Engines is more convenient and comfortable than that from OPACs. Search Engines guide users, in a simple way, not only to modify the search and re-execute it with the search box and keywords intact but also to narrow down the search result through various filtering options like subject, author, format, date, language, etc. They also liberally allow users to play with result pages and items by moving back and forth, marking and unmarking, downloading, e-mailing, sharing, exporting, processing and doing many more things. In addition, users can access and collate similar records with a ‘more like this’ feature for a given author or subject. One welcome development in the recent past is that library OPACs are trying to imitate Search Engines and introduce external links to images, full text, TOC, summaries, author information and reviews of retrieved documents. The classification scheme, cataloguing code and controlled vocabulary (thesaurus) are the three ‘sacred tools’ which pre-occupied librarians more than anything else over a century. But librarians did not bother to check the acceptance of these tools by end-users. Present-day Search Engines do have thesauri and taxonomies in the back end and help users map their natural language keywords, so that end-users benefit immensely without being taxed to know what is happening (or how it is happening) in the back end.
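The phrase-then-AND fallback described above can be sketched in a few lines of Python. The toy document list, function names and hit threshold here are purely illustrative; a real engine would work over a full inverted index.

```python
# Toy corpus and threshold, for illustration only.
DOCS = [
    "library catalogues and search engines",
    "search engines rank results by relevance",
    "user friendly library search",
]
MIN_HITS = 2  # widen the search if the phrase match returns fewer hits than this

def phrase_search(query):
    return [d for d in DOCS if query.lower() in d]

def boolean_and_search(terms):
    return [d for d in DOCS if all(t.lower() in d for t in terms)]

def smart_search(query):
    """Try the query as an exact phrase first; fall back to Boolean AND if hits are scarce."""
    hits = phrase_search(query)
    if len(hits) < MIN_HITS:
        hits = boolean_and_search(query.split())
    return hits

print(smart_search("library search"))  # phrase matches one document, AND fallback finds two
```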

Some of the rigid cataloguing rules, and the process of delineating metadata elements as access, descriptive and administrative data elements, are no longer relevant on the Web. As such, AACR, MARC and other standards have come to appear more as limitations from the user perspective than as a user-friendly service. On the other hand, Search Engines have effectively re-purposed these data elements to add value to the service and grown with the changed and expected user behaviour. Library catalogues are also changing, but slowly. For example, the extent of data mining done by Search Engines cannot be compared with the Circulation and OPAC modules of any library management software.


In a nutshell, the rule-based, data-centric design of OPACs turned out to be librarian-friendly, whereas the user-centric design of Search Engines is immensely user-friendly. OPACs are no match for Search Engines as far as user empowerment and minimal consumption-skill requirements are concerned. Of late, Federated Search Engines, in their effort to provide a one-stop digital service to users, face challenges in integrating diverse OPACs and different sets of databases within the same OPAC. It is heartening to note that the new J-Gate 2 has many features of a powerful Search Engine and is forging ahead to enhance them soon with even Federated Search Engine features to search all your digital resources, including the OPAC, in one go.

Saturday, May 12, 2012

How Google Search Engine Works





First of all it's important to understand how Google works. Essentially Google has a huge database that it refers to and which it builds automatically with the help of spiders and robots. These spiders are automated programs running on Google's servers which surf the web as we do, by following links from one page to another. When they find a page, they will then attempt to categorize it by looking for words that are repeated often (or just a couple of times) and by looking at what terms are associated with the page on other sites (in the links, for instance). The order of results is dictated by which site is the most relevant, as well as which seems to be the best quality based on external references etc., and you will then be given a list based on these factors.
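To make the crawl-and-categorize idea concrete, here is a very small Python sketch of a spider that follows links and counts word frequencies. It uses only the standard library, and the starting URL, page limit and word counting are illustrative stand-ins, not anything Google actually does.

```python
# A toy spider: fetch pages, extract links and visible text, count word frequencies.
import re
import urllib.request
from collections import Counter
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkAndTextParser(HTMLParser):
    """Collects outgoing links and visible text from one HTML page."""
    def __init__(self):
        super().__init__()
        self.links, self.text = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_data(self, data):
        self.text.append(data)

def crawl(start_url, max_pages=5):
    """Follow links breadth-first and build a toy index of per-page word counts."""
    index, queue, seen = {}, [start_url], set()
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
        except Exception:
            continue
        parser = LinkAndTextParser()
        parser.feed(html)
        words = re.findall(r"[a-z]+", " ".join(parser.text).lower())
        index[url] = Counter(words)  # term frequencies stand in for "categorizing" the page
        queue.extend(urljoin(url, link) for link in parser.links)
    return index

if __name__ == "__main__":
    for url, counts in crawl("https://example.com").items():
        print(url, counts.most_common(5))
```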


In short, then, Google looks for matches in your text as well as similar themes on the page. It doesn't (yet) understand what you are asking it, and so you shouldn't phrase your query as a question. Searching 'How do I go about fixing my PC?' isn't going to be very useful.

The other reason this isn't going to be very useful is that the language is too chatty (meaning that it will occur on fewer sites, as that's not the way everyone speaks) and because it's too general, meaning the results won't necessarily help you with your specific computer problem.

What you need to do then is to think of the short snippets of text that are likely to appear on a relevant page and then search for those. So in other words if your monitor won't turn on you should type in 'monitor won't turn on' or 'monitor not turning on', which is likely to match a lot of discussions on forums as well as a lot of other pages.

Better yet for something like that is to search for any specific error codes if they come up, as these will be reprinted by people on forums etc who are asking for advice.


Understanding the SERPs

'SERPs' are the pages that come up when you do a search – 'Search Engine Results Pages'. These include your results as well as some other things. The top few listings for instance will be adverts, and these will have a very faint yellowish background behind them – look out for this to identify the ads.

Meanwhile you will see a little listing underneath each result. This snippet of text comes right from the page and this is your best tool for deciding whether that page is providing the right information before you go ahead and click it. Be discerning here and you can save yourself a lot of time.

James Delurno's work with a registry cleaner company has taught him about how badly people are using Google to find their software. He mostly writes about technology and computer maintenance topics but decided to write this one about using Google effectively.
