35. IMAP (Internet Message Access Protocol) Protocol Testing<br />In general, protocol testers work by capturing the information exchanged between a Device Under Test (DUT) and a reference device known to operate properly. For example, if a manufacturer is producing a new keyboard for a personal computer, the Device Under Test would be the keyboard and the reference device the PC.<br />Network Architecture<br />In computing, network architecture is the design of a computer network. It comprises the design principles, physical configuration, functional organization, operational procedures, and data formats used as the basis for the design, construction, modification, and operation of a communications network, as well as the structure of an existing communications network, including its physical configuration, facilities, operational structure, operational procedures, and the data formats in use. It also outlines the products and services required in data communication networks. <br />Browsing<br />A Web browser is a software application that enables a user to display and interact with text, images, videos, music, games and other information, typically located on a Web page at a website on the World Wide Web or a local area network. Text and images on a Web page can contain hyperlinks to other Web pages at the same or a different website. Web browsers allow a user to quickly and easily access information provided on many Web pages at many websites by traversing these links. Web browsers format HTML information for display, so the appearance of a Web page may differ between browsers.<br />Browsing, therefore, is simply the act of viewing Web pages on the World Wide Web using a Web browser.<br />Some of the Web browsers currently available for personal computers include Internet Explorer, Mozilla Firefox, Safari, Netscape, Opera, Avant Browser, Konqueror, Google Chrome, Flock, Arachne, Epiphany, K-Meleon and AOL Explorer. Web browsers are the most commonly used type of HTTP user agent. 
Although browsers are typically used to access the World Wide Web, they can also be used to access information provided by Web servers in private networks or content in file systems.<br />Protocols and standards<br />Web browsers communicate with Web servers primarily using HTTP (Hypertext Transfer Protocol) to fetch webpages. HTTP allows Web browsers to submit information to Web servers as well as fetch Web pages from them. The most commonly used version of HTTP is HTTP/1.1, which is fully defined in RFC 2616. HTTP/1.1 imposes requirements of its own that Internet Explorer does not fully support, but most other current-generation Web browsers do.<br />Pages are located by means of a URL (Uniform Resource Locator, RFC 1738), which is treated as an address, beginning with http: for HTTP access. Many browsers also support a variety of other URL types and their corresponding protocols, such as gopher: for Gopher (a hierarchical hyperlinking protocol), ftp: for FTP (File Transfer Protocol), rtsp: for RTSP (Real-Time Streaming Protocol), and https: for HTTPS (an SSL-encrypted version of HTTP).<br />The file format for a Web page is usually HTML (Hypertext Markup Language) and is identified in the HTTP protocol using a MIME content type. Most browsers natively support a variety of formats in addition to HTML, such as the JPEG, PNG and GIF image formats, and can be extended to support more through the use of plugins. The combination of HTTP content type and URL protocol specification allows Web page designers to embed images, animations, video, sound, and streaming media into a Web page, or to make them accessible through the Web page.<br />Early Web browsers supported only a very simple version of HTML. The rapid development of proprietary Web browsers led to the development of non-standard dialects of HTML, causing problems with Web interoperability. 
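The URL scheme described above (http:, ftp:, rtsp:, https:) is what tells a browser which protocol handler to invoke. A minimal sketch of that dissection step, using Python's standard urllib.parse; the example URL is invented:

```python
# Dissecting a URL the way a browser must before choosing a protocol
# handler (http, https, ftp, ...). The URL below is hypothetical.
from urllib.parse import urlparse

url = "https://www.example.com:8080/docs/index.html?lang=en#intro"
parts = urlparse(url)

print(parts.scheme)    # "https" -> use HTTP over SSL
print(parts.hostname)  # "www.example.com" -> server to contact
print(parts.port)      # 8080 -> explicit port instead of the default 443
print(parts.path)      # "/docs/index.html" -> resource to request
```

Once the scheme is known, the browser opens a connection with the matching protocol and requests the path, then uses the MIME content type of the response to decide how to render it.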
Modern Web browsers support a combination of standards-based and de facto HTML and XHTML, which should display in the same way across all browsers. No browser fully supports HTML 4.01, XHTML 1.x or CSS 2.1 yet. Currently many sites are designed using WYSIWYG HTML-generation programs such as Adobe Dreamweaver or Microsoft FrontPage. Microsoft FrontPage often generates non-standard HTML by default, hindering the work of the W3C in developing standards, specifically with XHTML and CSS (Cascading Style Sheets, used for page layout). Dreamweaver and more modern Microsoft HTML development tools such as Microsoft Expression Web and Microsoft Visual Studio conform to the W3C standards.<br />Some of the more popular browsers include additional components to support Usenet news, IRC (Internet Relay Chat), and e-mail. Protocols supported may include NNTP (Network News Transfer Protocol), SMTP (Simple Mail Transfer Protocol), IMAP (Internet Message Access Protocol), and POP (Post Office Protocol). These browsers are often referred to as Internet suites or application suites rather than merely Web browsers.<br />Web Search Engine<br />A Web search engine is a search engine designed to search for information on the World Wide Web. The information may consist of web pages, images and other types of files. Some search engines also mine data available in newsgroups, databases, or open directories. Unlike Web directories, which are maintained by human editors, search engines operate algorithmically or use a mixture of algorithmic and human input.<br /> Before there were search engines there was a complete list of all web servers. The list was edited by Tim Berners-Lee and hosted on the CERN web server. As more and more web servers went online, the central list could not keep up. On the NCSA site, new servers were announced under the title "What's New!"
but no complete listing existed any more. <br />The very first tool used for searching on the (pre-web) Internet was Archie. The first Web search engine was Wandex, a now-defunct index collected by the World Wide Web Wanderer, a web crawler developed by Matthew Gray at MIT in 1993. Another very early search engine, Aliweb, also appeared in 1993.<br />Search engines were also some of the brightest stars of the Internet investing frenzy that occurred in the late 1990s. Several companies entered the market spectacularly, posting record gains during their initial public offerings. Some, such as Northern Light, have since taken down their public search engines and are marketing enterprise-only editions. Many search engine companies were caught up in the dot-com bubble, a speculation-driven market boom that peaked in 1999 and ended in 2001.<br />Around 2000, the Google search engine rose to prominence. The company achieved better results for many searches with an innovation called PageRank. This iterative algorithm ranks web pages based on the number and PageRank of the other web sites and pages that link to them, on the premise that good or desirable pages are linked to more than others. Google also maintained a minimalist interface to its search engine; in contrast, many of its competitors embedded their search engines in web portals.<br />By 2000, Yahoo was providing search services based on Inktomi's search engine. Microsoft first launched MSN Search (since re-branded Live Search) in the fall of 1998, using search results from Inktomi. As of late 2007, Google was by far the most popular Web search engine worldwide. A number of country-specific search engine companies have become prominent; for example, Baidu is the most popular search engine in the People's Republic of China and guruji.com in India.<br />Web search engines work by storing information about many web pages, which they retrieve from the WWW itself. 
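The iterative idea behind PageRank can be sketched in a few lines. This is a simplified illustration, not Google's actual implementation; the toy link graph is invented, and the damping factor of 0.85 is the commonly cited value:

```python
# Simplified PageRank sketch on a toy link graph (pages A-D are invented).
# Each page repeatedly passes a share of its rank to the pages it links to,
# so pages that many (highly ranked) pages link to accumulate more rank.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal rank
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                    # dangling page: spread evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:                               # share rank among outlinks
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(graph)
```

In this toy graph, page C is linked to by A, B and D, so after the iterations it ends up with the highest rank, matching the premise that well-linked pages are desirable.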
These pages are retrieved by a Web crawler (sometimes also known as a spider), an automated Web browser that follows every link it sees. <br />Exclusions can be made by the use of robots.txt. The contents of each page are then analyzed to determine how it should be indexed (for example, words are extracted from the titles, headings, or special fields called meta tags). Data about web pages is stored in an index database for use in later queries. Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages, whereas others, such as AltaVista, store every word of every page they find. The cached page always holds the text that was actually indexed, so it can be very useful when the content of the live page has been updated and the search terms no longer appear in it. <br />This problem might be considered a mild form of linkrot, and Google's handling of it increases usability by satisfying the user's expectation that the search terms will appear on the returned webpage, in keeping with the principle of least astonishment. Increased search relevance makes these cached pages very useful, even beyond the fact that they may contain data that is no longer available elsewhere.<br />
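The robots.txt exclusion mechanism mentioned above can be demonstrated with Python's standard urllib.robotparser. The robots.txt body, crawler name, and URLs here are all invented; a real crawler would first fetch the file from the site's /robots.txt before following any links:

```python
# Sketch of a crawler honoring robots.txt exclusions. The rules below
# allow everything except the /private/ path for all user agents.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The crawler consults the parsed rules before fetching each URL it sees.
allowed = parser.can_fetch("MyCrawler", "http://example.com/index.html")
blocked = parser.can_fetch("MyCrawler", "http://example.com/private/data.html")
```

A crawler that follows every link would call can_fetch on each candidate URL and simply skip those the site has excluded, which is how the exclusions feed into what ultimately gets indexed.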