Thursday, November 29, 2012

Reading Notes: Dec. 3

Reading 1 - http://www.noplacetohide.net/

-The government has a deep investment in online information tracking
-Metadata left behind in digital imprints is one possible cause of the lack of internet privacy

Reading 2 - http://epic.org/privacy/profiling/tia/

-With the Total Information Awareness program, the government gained unchecked access to personal materials over electronic networks
-News coverage leans towards this being an abuse of federal power
-The costs of this program have far outweighed its need

Reading 3 - http://greatlibrarynews.blogspot.com/2008/09/myturn-protecting-privacy-rights-in.html

-Legislation has now allowed for libraries to transmit certain aspects of patrons' information
-This is an overreach which has been rejected and compromised in many locations
-This raises serious questions of ethics as libraries can be put in the middle of government-citizen disputes or government monitoring
-Libraries could become less trusted if the public views their information as insecure there

Muddiest Point (11/26-12/2)

Can digital library initiatives reduce the need for physical institutions, re-envisioning the funding model of library systems and making it more efficient to focus budgets on systems that require little manpower for upkeep?

Thursday, November 15, 2012

Reading Notes (11/16-11/23)

Reading 1: Web Search Engines: Part 1 and Part 2

-Over the past decade, Google, Yahoo, and MSN have been the dominant search engines, analyzing and processing more information than any previous search engine
-The amount of data contained in these search engines is in the range of 400 terabytes
-Basic search processing revolves around crawling algorithms, which track whether links have been visited and sort them by relevance
-As the internet has grown, these search engines have had to adapt their algorithms to issues such as increasing speeds and duplicated links
-To adjust to the rising costs associated with these speed increases, GYM use a series of prioritized links, usually the most clicked, which show at the top of search queries
-This also helps reject spam
-Using indexing algorithms, these search engines are able to process incoming documents and information
-Indexing algorithms focus on certain search aspects, like keywords and phrases, or a combination of factors to provide a more relevant result
-Query processing algorithms use the particular words entered in the search to return only results containing all inputs
-Real query processors attempt to sort links by topical relevance
-To increase speed, these processors exclude certain links to cater to user preferences and cache important user data
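The indexing and query-processing stages above can be sketched in a few lines. This is a toy model only: the document texts and the `and_query` helper are invented for illustration, and real engines add ranking, caching, and spam filtering on top.

```python
# Toy inverted index and boolean AND query processor.
docs = {
    1: "web search engines crawl and index pages",
    2: "query processing ranks pages by relevance",
    3: "search engines rank pages for each query",
}

# Indexing: map each term to the set of documents containing it.
index = {}
for doc_id, text in docs.items():
    for term in text.split():
        index.setdefault(term, set()).add(doc_id)

def and_query(*terms):
    """Return only documents containing ALL query terms (boolean AND)."""
    results = [index.get(t, set()) for t in terms]
    return set.intersection(*results) if results else set()

print(sorted(and_query("search", "pages")))  # [1, 3]
```

Only documents 1 and 3 contain both "search" and "pages", so document 2 is excluded, matching the "results containing all inputs" behavior described above.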

Reading #2: Current developments and future trends for the OAI protocol for metadata harvesting. Library Trends, 53(4), 576-589.


-The Open Archives Initiative was created to help manage access to diverse scholarly publications as they were transferred to online and digital formats

-The OAI protocol was introduced and adopted in 2001
-Works over standard HTTP, with requests and responses expressed in XML
-Many OAI environments have switched to domain-specific setups with examples including individually run OAI environments in archives and museums
-To provide completeness of results, institutions have developed methods to create inventories and generate responses
-These registries make up the backbone of OAI retrieval
-Extensible Repository Resource Locators are examples of how XML documents can be manifested within OAI
-Challenges to OAI still exist in the form of metadata variations and inconsistent formats among the data
-Developers are currently working to adapt OAI to access restrictions and to codify best practices connecting the institutions that dominate the OAI landscape
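Since OAI-PMH rides on plain HTTP, a harvest request is just a URL with a `verb` parameter; the repository answers in XML. A minimal sketch of building such a request, where the base URL and the set name `museum` are placeholders rather than a real repository:

```python
from urllib.parse import urlencode

# An OAI-PMH request is an HTTP GET with a "verb" parameter; the response
# comes back as XML. The base URL below is a placeholder, not a real endpoint.
base_url = "https://example.org/oai"
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc", "set": "museum"}
request_url = base_url + "?" + urlencode(params)
print(request_url)
# https://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc&set=museum
```

A harvester would fetch this URL and parse the returned XML into its registry, which is the retrieval backbone the notes describe.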

Reading #3: http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0007.104


-The Deep Web consists of the pages that make up the un-searchable internet, including off-tracking links, out-of-service web pages, regionally restricted pages, and privately held pages on secure websites

-The Deep Web is believed to contain many times more information than is publicly accessible on the World Wide Web
-Search engines sometimes attempt to draw from the information and pages of the Deep Web, with varying results
-Many Deep Web pages are hidden from search engines because crawlers scan the World Wide Web only shallowly for results
-The difficulty of retrieving these hidden resources is rooted in the fact that most of them are un-indexed, making retrieval nearly impossible
-Searching must therefore change to account for these untapped resources
-Deep Web information could cover the overwhelming majority of search queries
-Managing these resources has turned into an argument of "micro" vs. "macro" searching



Muddiest Point (11/12)

Can Digital Libraries alleviate the strain on traditional institutions by appealing to donors and funding agencies?

Friday, November 9, 2012

Reading Notes (11/09-11/12)

Reading #1 - http://www.dlib.org/dlib/july05/mischo/07mischo.html

-A major problem in digital librarianship is keeping up with the plethora of methods through which digital materials are published
-Finding a link between these various methods has been the most difficult task
-Interest in digital libraries began with government-funded studies during the early 1990s
-The NSF disbursed $68 million in grants among six university research projects
-Despite the early flow of funding, the development of digital publishing has far outpaced the research done in digital libraries
-Federation of materials and resources among institutions is the current best practice in developing digital libraries

Reading #2 - http://www.dlib.org/dlib/july05/paepcke/07paepcke.html

-In 1994, the NSF launched the Digital Library Initiative
-It combined librarians, records managers, and historians to develop possible digital library solutions
-Computer scientists saw it as a great way to distribute information, while librarians saw a new source of funding for innovation and research
-Despite early success, the growth of the World Wide Web challenged these institution-based projects
-Copyright restrictions in the DLI prevented wide use of the WWW in research and development under federal grants
-CS experts welcomed the use of the WWW, but librarians understood the underlying issues of distributing information without check
-Still, given the speed of the WWW, digital librarians retain a place in providing reference to those looking to narrow research and search results

Reading #3 - http://www.arl.org/bm~doc/br226ir.pdf

-The drop in online storage costs and the growth of Internet use have led to interesting possibilities for developing digital libraries
-These efforts focus mainly on connecting online institutional repositories, compiling databases of resources that can be accessed en masse and at any time
-Many institutions have partnered with technology powerhouses like HP to develop the technology and software necessary to support digital repositories
-Institutional repositories are ways in which library systems provide access and information to larger communities through quick online or digital distribution
-They are necessary to adapt to the quickening of information distribution facilitated by the World Wide Web and online repositories like Wikipedia or publication databases
-They also provide a way for scholars or instructors to supply information to students within short academic terms without needing to acquire a lot of physical materials
-This greatly facilitates increases in scholarly materials and speeds up the academic process
-Issues still occur, such as the "watering down" of scholarly information as more and more is produced; copyright infringement issues inherent to published materials also challenge quick and easy distribution
-Because of this, trends are constantly changing to adapt to the speed of internet demand and its reaction to the traditionally protected status of paper-published materials

XML Muddiest Point (11/09/12)

Are new languages based on XML gaining a significant enough user base to continue XML's development?

Friday, November 2, 2012

Reading Notes (11/4-11/10)

Reading 1 - https://burks.bton.ac.uk/burks/internet/web/xmlintro.htm

-XML stands for Extensible Markup Language
-XML allows users to bring multiple files together
-Provides processing control information to supporting programs, such as web browsers
-Has no predefined set of tags
-Can be used to describe any text structure (books, letters, reports, encyclopedias, etc.)
-Assumes documents are composed of many "entities"
-Uses markup tags to allow readers to more easily understand document components
-Most tags have contents, although empty tags can serve as placeholders
-Unique identifier attributes allow communication between two separate parts of a document
-Can also allow for incorporation of characters and text outside of standard databases
-Can identify tables and illustrations and their positioning
-XML was developed for easier navigation and storage in databases
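The points above — user-defined tags, attributes as identifiers, and empty placeholder elements — can be seen in a few lines with Python's standard XML parser. The document and its tag names are invented for illustration:

```python
import xml.etree.ElementTree as ET

# A tiny XML document: tags are user-defined, the "id" attribute is a unique
# identifier, and <signature/> is an empty placeholder element.
doc = """
<letter id="L1">
  <to>Library Staff</to>
  <body>Please renew my books.</body>
  <signature/>
</letter>
"""

root = ET.fromstring(doc)
print(root.tag)                     # letter
print(root.attrib["id"])            # L1
print(root.find("to").text)         # Library Staff
print(root.find("signature").text)  # None (empty element, no contents)
```

Because XML has no predefined tag set, `letter`, `to`, and `signature` mean nothing to the parser itself; the structure is whatever the author declares.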

Reading 2 - http://www.ibm.com/developerworks/xml/library/x-stand1/index.html

-XML 1.0 is the base XML technology, building on Unicode
-Defines strict rules for text format and allows Document Type Definitions (DTDs)
-Only the English-language version of the specification is considered normative
-XML 1.1 fundamentally alters the definitions of characters to let them adapt to changes in the Unicode specification
-Also provides for normalization of characters by referencing the Character Model for the World Wide Web 1.0
-XML is based on the Standard Generalized Markup Language (SGML)
-The article provides sections pointing to third-party internet tutorials
-XML Catalogs is used to set guidelines for XML entity identifiers, which are defined by their Uniform Resource Identifiers
-URIs are similar to the URLs used in web browsers
-Namespaces in XML 1.0 is a mechanism for the universal naming of elements and attributes
-Namespaces in XML 1.1 updates the mechanism's support for identifiers
-XML Base increases the efficiency of URI resolution
-XLink provides the ability to express links in XML documents
-Schematron provides a top-down management method for users
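The "universal naming" that Namespaces provides can be demonstrated concretely: two vocabularies may both define a `title` element, and the namespace URI keeps them apart. The `urn:example:*` URIs and the catalog content below are invented placeholders:

```python
import xml.etree.ElementTree as ET

# Two vocabularies both define <title>; namespace URIs disambiguate them.
doc = """
<catalog xmlns:book="urn:example:book" xmlns:film="urn:example:film">
  <book:title>Moby-Dick</book:title>
  <film:title>Metropolis</film:title>
</catalog>
"""
root = ET.fromstring(doc)

# ElementTree expands each prefixed name to {namespace-URI}localname.
print(root.find("{urn:example:book}title").text)  # Moby-Dick
print(root.find("{urn:example:film}title").text)  # Metropolis
```

The prefix (`book:`, `film:`) is only local shorthand; what identifies the element universally is the URI it is bound to.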

Reading 3 - http://www.w3schools.com/Schema/default.asp

-Schema describes structures of XML Documents
-Allows easy manipulation of XML language
-Simplifies Database

Reading 4 - http://xml.coverpages.org/BergholzTutorial.pdf

-HTML cannot define content, only presentation
-In terms of syntax, XML is very similar to HTML in form
-DTDs define the structure of XML documents
-DTD elements are either terminal or nonterminal
-Unlike HTML links, XML links can be two-way
-The Extensible Stylesheet Language (XSL) is a form of template
-XML Schema aims to replace DTDs and defines datatypes
-XML is so important and popular because it is very versatile: more a family of languages than a single structure

Muddiest Point 10/29-11/4

No Muddiest Point.

Friday, October 26, 2012

LIS 2600 Reading Notes for 10/26

CSS Tutorial:

-Gives the user a very basic feel for the usage and application of CSS in HTML formats
-Tutors basic tasks such as adding a specific format, color, style, or link
-Shows that CSS really streamlines the HTML writing process, allowing simple substitutions and saving time
-Because it provides a sample template, it can serve as a jumping-off point when beginning a CSS project
-Provides examples of commonly encountered issues, their root causes, and solutions
-A very useful tool for beginning to understand CSS and its relationship to HTML

Cascading Style Sheets Chapter 2:

-CSS allows HTML elements to be marked up in terms of their formatting
-Makes the whole process more user-friendly
-CSS pages can be created using style sheets or software built by web developers
-Style sheets function as rules, creating a template for CSS creation
-In design, CSS is based around the concepts of speed and brevity
-In the past, CSS required special browser support; currently nearly all web browsers support CSS
-The top-down management system in CSS is referred to as a tree
-Branches are indented to represent physical items on the page
-Most properties are inherited: entered once, they repeat to save time
-Color and background are the most common aspects styled with CSS
-Other common tasks include font styles and margins
-Compared to most other web-building approaches, CSS gives a defined structure, allowing beginning web builders an easier interface to use

Muddiest Point 10/20-10/26

Has the popularity of HTML web building simplified the process to the point where interchangeable visual representations in web building are now an option to cater to differing user bases?

Friday, October 19, 2012

Reading Notes (10/22-10/28)

Reading 1: http://www.wired.com/images/multimedia/webmonkeycheatsheet_full.pdf

-Lays out basic cheatsheet for tags within HTML creation and editing
-Provides basic details for tagging within areas such as the header, text, formatting, forms, graphical elements, and inserting links

Reading 2: http://www.w3schools.com/HTML/html_intro.asp

-HTML stands for HyperText Markup Language
-Structured around a series of markup tags
-Tags are usually structured in pairs that open and close, or begin and end
-Web browsers read and translate HTML into a graphical interface
-HTML is on its fifth generation (HTML5), although it is not yet in heavy use
-<!DOCTYPE> is the tag that tells the browser how to decode and read the page
-HTML can be edited with third-party software from companies such as Microsoft and Adobe
-An HTML element is everything from the start tag to the end tag, including the displayed information
-HTML can have empty elements
-Attributes are additional information included in tags during the creation process
-Attributes are always contained in the start tag and should always be in quotes
-Headings are defined by <h1> through <h6> and are numbered in order of priority
-Paragraphs are defined by <p> and may also contain line breaks
-Formatting is used to make text alterations such as bold <b> or italics <i>
-Hyperlinks are defined by <a> and often contain additional attributes
-Cascading Style Sheets are used to control styling within a production
-Images are defined with <img> and often come from internet resources
-Tables are created with <table>, with rows defined by <tr>
-Lists can be ordered or unordered, created with <ol> or <ul>
-Layouts of HTML webpages are a collection of the previously mentioned tags, ordered through <div> tags
-Colors are coded through RGB values
-These can be specified by name or by code value
-A URL is the link to an HTML page
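The "browsers read and translate HTML" point can be made concrete with Python's built-in HTML parser, which reports start tags, attributes, and text as separate events, much like the first pass of a browser. The snippet being parsed is invented:

```python
from html.parser import HTMLParser

# A browser-style tokenizing pass: each tag, attribute set, and run of text
# in the (invented) snippet is reported as a separate event.
snippet = '<h1>Hello</h1><p>Visit <a href="https://example.com">this site</a>.</p>'

class TagLogger(HTMLParser):
    def __init__(self):
        super().__init__()
        self.events = []
    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag, dict(attrs)))
    def handle_endtag(self, tag):
        self.events.append(("end", tag))
    def handle_data(self, data):
        self.events.append(("data", data))

parser = TagLogger()
parser.feed(snippet)
for event in parser.events:
    print(event)
```

Note how the `href` attribute travels inside the `<a>` start tag, matching the rule above that attributes always live in the start tag.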

Reading 3: http://books.google.com/books?id=l_MFZYMv3YgC&pg=PA15&lpg=PA15&dq=introduction+to+html+pratter&source=bl&ots=nXRgMFYZHz&sig=muV0UY1c_ePZO1pcdu8_V_IdbwQ&hl=en&sa=X&ei=Mvs4ULG9O4Gf6QG8h4GICw&ved=0CC0Q6AEwAA#v=onepage&q=introduction%20to%20html%20pratter&f=false

This chapter of Web Development with SAS by Example expands on the basic topics in the previous reading, giving an in-depth explanation of the processing along with detailed examples and templates of how an HTML production should look during creation.  Also provided are graphical outputs showing how the actual code and tags connect to the visual interface of the web browser, giving the student/user a good look at how to effectively encode an HTML page.  Also explained are the differences between XHTML and HTML, their uses and procedures, and the effectiveness of various media formats, such as .gif, .png, and .wmv, within the interface.  Again, coupled with the previous resource, these are the foundation for understanding and implementing HTML.

Reading 4: Goans, D., Leach, G., & Vogel, T. M. (2006). Beyond HTML: Developing and re-imagining library web guides in a content management system. Library Hi Tech, 24(1), 29-53.

This text examines the usage of HTML within library content management systems through a series of case studies.  Starting with pre-HTML, low-security library systems, the article goes on to analyze the benefits of HTML in such a system.  A CMS focuses on managing the day-to-day activities of reference in the library web guide, including titles, subjects, authors, etc.  The main question presented is whether to use a commercial product or an in-house production.  The benefit of a commercial CMS is its efficiency, but the main drawbacks are the high costs of third-party software and IT managers.  For in-house productions, the costs are much lower, but the final production has a less refined feel.  The case study this article focuses on is the implementation of an HTML CMS at Georgia State University, starting in 2002.  The authors describe their use of proprietary SQL database software, which provides the intricate network of programming and code that makes up their CMS.  The example of GSU provides a template, the case study states, showing the development and evolution of the program from a relatively basic structure to an intricate system of reference management with a complex virtual interface.  The authors contend that this model can be used by other, less-funded institutions to create their own CMS without exorbitant expenditures.

Muddiest Point 10/15-10/21

Will the incorporation of internet browser plug-ins effectively spell the end of the software era, if all programs can be integrated to work within a browser format?

Friday, October 12, 2012

LIS 2600 Muddiest Point 10/08-10/14
In the development of internet networking, mainly the switch to TCP/IP networking, were there more efficient systems that fell by the wayside due to the costs of maintenance and upkeep?

Friday, October 5, 2012

LIS 2600 Reading Notes Week 6 (Oct. 8-Oct. 14)

Reading 1 - http://en.wikipedia.org/wiki/Local_Area_Network

-Local Area Networking interconnects computers within a limited area (home, school, etc.)
-Characteristics include high-speed transfer, a small geographic area, and no need for leased telecommunication lines
-Ethernet and Wi-Fi are the most common technologies
-Developed in the professional world (Xerox PARC and Chase Manhattan Bank) in the 1970s
-With personal computers, LANs grew to meet demands for shared storage and communication
-TCP/IP has become the LAN standard
-Cables were originally coaxial but evolved into fiber optics
-A LAN usually functions as a series of switches connected to a modem or router
-Multiple LANs can connect through leased services or the Internet

Reading 2 - http://en.wikipedia.org/wiki/Computer_network

-A group of computers or hardware devices connected to allow sharing of information and resources
-One device sends information, another is the remote device receiving it
-Networks can be classified by data transfer medium, communications protocol, scale, and topology
-Communications protocols define the rules and data formats for exchanging information in a network
-Internet Protocol and Ethernet are two examples
-Developed from research projects in computer technologies (50 kbit/s transfer maximum)
-Today the telephone system and the Internet are built entirely on networks
-Computer networks facilitate communication; permit sharing of files and other types of information; share network and computing resources; may be insecure due to ease of access; may interfere with other technologies; and can be difficult to set up
-Can also be classified by hardware and software technology (electrical cable, optical fiber, and radio waves)
-Wired technologies: twisted-pair wire, coaxial cable, ITU-T G.hn, optical fiber
-Wireless technologies: terrestrial microwave, communication satellites, cellular and PCS systems, radio and spread-spectrum technologies, infrared communications, and Global Area Network
-Exotic technologies: IP over Avian Carriers, radio waves
-Communications protocol - a set of rules for exchanging information over a network
-Ethernet - a family of protocols used in LANs, standardized as IEEE 802
-Internet Protocol Suite - TCP/IP - the foundation of all modern networking - defines all areas of access
-SONET/SDH - used to transfer multiple digital bit streams over optical fiber using lasers
-Asynchronous Transfer Mode - a switching technique for telecom networks - diminishing in favor of more modern networks
-Personal Area Network - used for communication among devices close to one person
-Local Area Network - limited geographic area, based on Ethernet technology
-Metropolitan and Wide Area Networks - expand LANs over larger geographic areas
-Virtual Network - traffic flows between virtual machines employing virtual protocols
-Star and bus topologies are most common in Ethernet and wireless LANs
-Hardware components include bridges, routers, hubs, and network adapter cards
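The sender/remote-device exchange over TCP/IP can be sketched with Python's socket module: a tiny server and client talk over the loopback interface. The message text is invented; port 0 asks the OS for any free port.

```python
import socket
import threading

# TCP/IP in miniature: a server and a client exchange a few bytes over
# the loopback interface (127.0.0.1).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _addr = server.accept()        # wait for the remote device
    conn.sendall(b"hello from the server")
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.create_connection(("127.0.0.1", port))
data = b""
while True:                              # read until the server closes
    chunk = client.recv(1024)
    if not chunk:
        break
    data += chunk
message = data.decode()
t.join()
client.close()
server.close()
print(message)  # hello from the server
```

The same API works unchanged across a real LAN or the Internet; only the address changes, which is the point of the layered protocol suite described above.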

Reading 3 - Coyle, K. (2005). Management of RFID in libraries. Journal of Academic Librarianship, 31(5), 486-489.

-RFID - Radio Frequency Identification - consists of a computer chip and antenna printed on a medium
-Unlike barcodes, tags do not have to be visible to be read
-RFID isn't a single technology; there are hundreds of different products on the market
-Used in drug production and to prevent piracy
-Excels in the economic market, but is still developing in the library trade
-Hard to apply to library science because of the tags' perceived "throw-away" nature
-Security issues also create problems for practical use
-There are positives: RFID allows quicker and easier scanning of multiple items
-Provides exact calculations of use and timing
-Libraries rarely see a return on investment, making RFID tough to apply because of its expensive, high-tech nature
-The only practical application may be self-checkout, removing the reference interactions needed in the library
-Tough to apply to delicate materials such as pamphlets and magazines
-Also has problems reading on digital media such as magnetic or metal discs
-If the technology can adapt to be changed without reapplication, it is very viable
-Conclusion: still developing, yet its potential for the library field is wide-ranging

Muddiest Point Week 5 (Oct. 1-Oct. 7)

With metadata, can information be accurately analyzed in an ever-changing binary landscape?  If so, will technological obsolescence cause irreparable damage to previously recorded or produced metadata?

Thursday, September 27, 2012

LIS 2600 Reading Notes (Oct. 1-Oct. 7)

Reading 1 - http://www.getty.edu/research/publications/electronic_publications/intrometadata/setting.html

-Metadata = "data about data"
-Important aspect of the digital world as we try to discern the reliable from unreliable, in terms of information
-Information Objects have three characteristics - Content, Context, and Structure
-Content - intrinsic information within an object
-Context - Who, What, Where, How, and Why aspects
-Structure - the formal set of associations within or among information objects
-Archives and Museums are slower in deciphering metadata due to a slow adoption of standards and reference best practices
-Metadata with digital resources becomes about preserving the coding and information, rather than the representation
-Categorizing metadata relies on the information's function such as: Descriptive, Technical, or Use
-Major life cycle of an information object: Creation/Reuse -> Organization and Description -> Validation -> Searching and Retrieval -> Utilization and Preservation -> Disposition
-Attentive Metadata can lead to enhanced efficiency in information searches
-Understanding of Metadata can allow institutions to track their digital holdings in terms of legal rights
-Metadata accrues over time and the more we have the more we can discern about any given information object


Reading 2 - http://dublincore.org/1999/06/06-overview/

-The Dublin Core Metadata Initiative is concerned with offering wide-ranging methods of interpreting and describing worldwide Digital Resources
-Requirements for the DCMI - Internationalization, Modularization, Element Identity, Semantic Refinement, Identification of encoding schemes, Specification of Controlled Vocabularies, Identification of structured compound values
-Works on the basic assumption that owned information needs to be defined to be accessed
-Has a designed set of programming and database 'semantics' to track information
-Based on Rules of XML
-DCMI has semantic refinement to ensure universality
-Vocabulary in DCMI allows for contextual decoding
-DCMI faces problem of adapting existing languages to an international standard
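A minimal Dublin Core description can be built with standard XML tooling, which shows how the DCMI element vocabulary rides on XML namespace rules. The record's values here are invented; only the namespace URI is the real Dublin Core one:

```python
import xml.etree.ElementTree as ET

# A minimal Dublin Core record: elements like title, creator, and date live
# in the DC namespace. The field values are invented for illustration.
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)  # serialize with the familiar "dc:" prefix

record = ET.Element("record")
for element, value in [("title", "Course Notes"),
                       ("creator", "A. Student"),
                       ("date", "2012-10-01")]:
    child = ET.SubElement(record, f"{{{DC}}}{element}")
    child.text = value

print(ET.tostring(record, encoding="unicode"))
```

Because every element is namespace-qualified, any harvester that understands Dublin Core can interpret the record without knowing anything else about its source, which is the universality the notes describe.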


Reading 3 - http://www.hsl.unc.edu/Services/Tutorials/ENDNOTE/intro.htm

-EndNote is the leading bibliography software on the internet
-Output in EndNote allows you to change reference formats and styles
-Clicking same heading twice will reverse order inputs
-EndNote is efficient because it saves files periodically to catalog works
-EndNote compresses files for easy use
-EndNote Library allows you to store multiple similar references in a single collection
-Can use online search to combine references from other sources
-EndNote can combine references from other compatible databases
-Cite While You Write allows you to add references to word documents on-the-go
-EndNote Style Manager allows you to cater to reference specific publications by converting them to the desired output
-EndNote X5 can automatically update reference to new scholarly standards


Muddiest Point - 09/24-09/30

When building and managing databases, is it possible, after joining multiple tables, for primary keys and foreign keys to switch roles, or assume new roles?

Wednesday, September 19, 2012

Reading Notes (Sept 24-30)

Reading 1: http://en.wikipedia.org/wiki/Database

-Databases are organized collections of data
-Software includes Oracle, SQL Server, and Microsoft Access
-XML is currently the most popular database file format
-Databases can be physical or digital, though the term usually refers to digital in current society
-Cloud databases are the new and upcoming format for digital databases
-Database efficiency depends wholly on the computing power of the output device
-Security is enacted through encryption
-Database Management Systems are the method through which we decrypt and understand database materials
-Models include hierarchical, network, inverted file, and relational
-Databases were originally grounded in physical storage; now more is done in cloud and internet storage
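The relational model mentioned above can be tried directly with SQLite, the small relational engine built into Python. This is a minimal sketch: the table, column names, and rows are all invented.

```python
import sqlite3

# A tiny relational database: one table, two rows, one query.
conn = sqlite3.connect(":memory:")  # in-memory database, nothing on disk
conn.execute(
    "CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT, year INTEGER)"
)
conn.executemany(
    "INSERT INTO books (title, year) VALUES (?, ?)",
    [("Moby-Dick", 1851), ("Dracula", 1897)],
)

# Relational retrieval: rows are selected by their attribute values.
rows = conn.execute("SELECT title FROM books WHERE year > 1880").fetchall()
print(rows)  # [('Dracula',)]
conn.close()
```

The `INTEGER PRIMARY KEY` column is the unique identifier a DBMS uses to organize and retrieve each row, foreshadowing the primary-key discussion in the normalization reading below.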

Reading 2: http://en.wikipedia.org/wiki/Entity%E2%80%93relationship_model

-Refers to abstract ways of describing databases
-Diagrams are often used to express information processing in database systems
-Three-level approach: conceptual, logical, and physical models
-Items are related to each other through a series of key words (married, performed, etc.)
-Revolves around ownership and possession
-Diagramming tools include MagicDraw and dbForge
-Cannot be used for semi-structured data
-Studies show these limitations have caused the model to be less favored relative to Enhanced Entity-Relationship models, particularly in the business world

Reading 3:http://www.phlonx.com/resources/nf3/

-First Normal Form - no repeating elements
-Repeating elements are items with similar attributes; they interfere with the atomicity of the database
-Atomicity - the indivisibility of attributes into smaller parts
-In 1NF, each row of data must have a unique identifying key
-This is referred to as the primary key
-Second Normal Form - no partial dependencies on a concatenated key
-Focused on the orders table structure
-The order_date and order_id columns illustrate the key problem addressed by Second Normal Form: partial dependencies that disrupt database systems
-By switching orders to a single-column primary key, 2NF can be satisfied
-Third Normal Form - no dependencies on non-key attributes
-Sorting by secondary attributes causes issues when retrieving data from the database
-It removes context and can derail the transfer of information
-Only primary relevant attributes can be used to recall and construct databases
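The non-key-dependency rule can be shown end to end in SQLite: a customer's city depends on the customer, not on the order key, so Third Normal Form moves it into its own table. All table names and data below are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Unnormalized: customer_city repeats for every order by the same customer,
# and it depends on a non-key attribute (customer), violating 3NF.
conn.execute("""CREATE TABLE orders_flat
                (order_id INTEGER PRIMARY KEY, customer TEXT, customer_city TEXT)""")
conn.executemany("INSERT INTO orders_flat VALUES (?, ?, ?)",
                 [(1, "Ada", "Pittsburgh"),
                  (2, "Ada", "Pittsburgh"),
                  (3, "Ben", "Erie")])

# Normalized: the dependent attribute moves to its own table, keyed by customer.
conn.execute("CREATE TABLE customers (customer TEXT PRIMARY KEY, city TEXT)")
conn.execute("""INSERT INTO customers
                SELECT DISTINCT customer, customer_city FROM orders_flat""")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer TEXT)")
conn.execute("INSERT INTO orders SELECT order_id, customer FROM orders_flat")

# Each city is now stored once per customer and recovered with a join.
rows = conn.execute("""SELECT o.order_id, c.city
                       FROM orders o JOIN customers c ON o.customer = c.customer
                       ORDER BY o.order_id""").fetchall()
print(rows)  # [(1, 'Pittsburgh'), (2, 'Pittsburgh'), (3, 'Erie')]
```

Nothing is lost in the decomposition: the join reconstructs every original row, while updates to a customer's city now happen in exactly one place.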


Muddiest Point (Sept. 17-23) Media Storage - LIS 2600

In an ever-changing storage media world, can a universal industry standard ever be created using a single, or decode-able and translatable, storage source?

Patrick Trembeth

Thursday, September 13, 2012

Muddiest Point (Sept. 9-Sept.16) for LIS 2600

Muddiest Point Question:

If, under Moore's Law, technology exponentially increases in efficiency, shouldn't digitization processes therefore have become more cost effective within the past decade, as what was once rare and very expensive is now industry standard as technology has developed?

Sunday, September 9, 2012

LIS 2600 Week 2 (Sept 10) Digitization Reading Notes

Lied Library @ Four Years: Technology Never Stands Still

As technology continually advances, with stronger and faster computing, so must the institutions which advertise its ready availability.  This article uses UNLV's Lied Library, opened in 2001, as an example of how the landscape in a technology environment evolves over short periods of time, and of the many issues and problems that pop up as a result of the rapid advancement.  With newer tech, physical problems arise as the stronger computing creates more intense problems in the building itself, evidenced by Lied's air conditioning struggling to keep up with the heat output by the powerful computers in the building.  There are also problems in security as the internet landscape constantly changes, with anti-virus protection at a premium, crippling the efficiency demanded from such a high-tech institution.  Finally, there are problems in dealing with the rapid advancement of the software itself, which must be managed to work smoothly across operating systems and hardware, evidenced by glitches in Adobe software and printing problems.

European Libraries Face Problems in Digitalizing

As we move into the digital age, problems arise in funding the conversion of analog items to digital.  Innovative ways must be developed to pay for the rights to items on the road to digitization, as well as for the cost of digitizing the items themselves.  Pay-per-access is one popular method for viewing items in the digital world, opening the possibility of good profit from the process.  Also available are subscription services that allow access to vast numbers of items for a set rate per time period.  Both options provide a way to recoup digitization costs, but this is still an evolving field with many more possibilities and unanswered questions.

A Few Thoughts on the Google Books Library Project

Focuses on the complexities and advantages of transferring analog to digital within the Google Books Library Project.  Credits the project with making a vast amount of material available to many more people than would be possible through the slow, cumbersome process of using physical libraries for research, with the stipulations of rarity and availability that plague that medium.  Also commends the sustainability of digital resources, as they do not break down, but criticizes the apathy this creates toward physical items and the trouble of converting digital binary back to physical literature.  Finally, it focuses on the struggle of getting younger generations to understand the usefulness of "old" books, and the task of educating new generations in using both forms of research and information attainment.