Thursday, December 9, 2010

Muddiest Point

I personally believe that the YouTube video titled “Explaining Cloud Computing” was probably the weakest piece of material. As I often tend to claim, I am not saying it was bad or useless, merely the weakest in comparison to the others. I do appreciate that it gives the viewer a general idea of how the technology really works. However, I felt that Galen Gruman had already accomplished this goal through his article, with the addition of other pieces of information that further explain its importance. Putting those circumstances into consideration, perhaps we could have done without the YouTube video. Then again, I already knew how cloud computing works, due in part to my experience taking “LIS 2000: Understanding Information” in my first semester of this master’s program. I might as well have taken a brief glance at both sources and claimed that I could have done without either of them.

Week 14: Comments

Comment #1: http://lis2060notes.wordpress.com/2010/12/06/reading-notes-dec-6-for-dec-11/

Comment #2: http://nancyslisblog.blogspot.com/2010/11/readings-notes-unit-14.html

Week 14: Organizational Computing, Cloud Computing, and the Future

Although electronic records and digitization have enabled us to store so much information in so little space, we are faced with harsh reality once again as we realize that they too have their limitations. As a result of the Internet and the technological innovations that came along with it, the issue of storage has been resolved (probably just for now). This breakthrough is known as “cloud computing.” Because the hard drives individuals and organizations have in their possession are not enough to store all the materials they want to maintain digitally, especially when it comes to keeping extra copies handy, cloud computing can serve as a secondary place to store information. Through access to the Internet, these digital sources can easily be transferred to those distant locations for safekeeping, while users save room on their own hard drives for more digital sources of information. This sort of arrangement would be especially useful for libraries. Libraries have learned to turn the latest technological breakthroughs to their advantage by incorporating them into their services, which happens to include access to digital copies of the books they have digitized. It would seem logical to include cloud computing so that the staff has a secondary location in which to maintain the digital copies of digitized books and digital records and files. Either way, the utilization of cloud computing should allow libraries to free up room on their own hard drives, or to ensure the availability of a backup in case of an emergency.
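
As a rough sketch of how a library might push extra copies into cloud storage, the snippet below walks a local folder of digitized files and uploads each one to a bucket. The bucket name, folder name, and the use of Amazon S3 via the third-party boto3 library are all assumptions for illustration, not anything prescribed in the readings.

```python
# A rough sketch of using cloud storage as a secondary copy of digitized files.
# Assumes a hypothetical S3 bucket named "library-backups" and the third-party
# boto3 library; any cloud provider with a similar API would serve the same role.
import os
import boto3

s3 = boto3.client("s3")
BUCKET = "library-backups"          # placeholder bucket name
LOCAL_DIR = "digitized_books"       # local folder holding the digital copies

def back_up_directory(local_dir: str, bucket: str) -> None:
    """Upload every file in local_dir to the bucket, keeping relative paths as keys."""
    for root, _dirs, files in os.walk(local_dir):
        for name in files:
            path = os.path.join(root, name)
            key = os.path.relpath(path, local_dir)
            s3.upload_file(path, bucket, key)   # the remote copy survives a local failure
            print(f"backed up {path} -> s3://{bucket}/{key}")

if __name__ == "__main__":
    back_up_directory(LOCAL_DIR, BUCKET)
```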

I am reminded of a chapter from Albert-László Barabási’s book “Linked: How Everything is Connected to Everything Else and What it Means for Business, Science, and Everyday Life” once again. When Al-Qaeda executed the tragic events of September 11, 2001, the terrorists responsible for the attacks on the Twin Towers anticipated that some kind of domino effect would occur, i.e. once a symbol of the Western world’s economic power was destroyed, everything else would collapse with it, thus weakening its status in every possible sense. What the criminals did not anticipate was that the institutions based in the World Trade Center operated within a decentralized system, which enabled the surviving parts to take over where the ones that had been shut down left off. Cloud computing can follow a similar example, so long as organizations have a secondary location handy to store extra copies of their digital or digitized files. Whatever may happen to the organization, the cloud computing service it has been using can come to the rescue with the backups. What happened on that horrific day is testimony that a decentralized system can allow sources of information to withstand catastrophe. However, the scenario only applies to digital or digitized materials and copies. Because the format calls for an even greater need to preserve the physical copies, for the sake of further ensuring the survival of the information, it should be mandatory to print out the files to complement the services that cloud computing provides.

Tuesday, November 30, 2010

Muddiest Point

I believe the websites we were given were the weakest bits of reading for the week. I was able to figure out what sorts of issues both of the sources were trying to inform me about, but I often prefer receiving my information as articles. When it comes to using websites as sources, I often have to go through the trouble of going all over the place, picking up details from one location after another, and putting the content into a nutshell. In the case of articles, all the details I need to know are already in front of me, rendering the information well prepared for me to analyze and saving myself a lot of hassle. The “EPIC Terrorism (Total) Awareness Page” website was nothing but links to other articles, and I did not know how far I was supposed to read. As for “No Place to Hide,” I did not know whether I was supposed to be analyzing the content of the website or the book to which it appears to be devoted. Either way, I really wish I had something more straightforward to read instead.

Discussion Topic: Online Privacy

The biggest fear in regards to privacy is not really the notion of being watched, but what those at the other end are capable of doing with the information pertaining to the individuals under surveillance. When it is an issue of national security, how threatened people tend to feel depends on the level of trust they are willing to place in their government. The only way authorities should track down suspects through advanced technology is by maintaining the anonymity of those they are inspecting while implementing proper strategies to counter the possible threats. Yet it is not always a government that wants to gather information on people; businesses do as well. Since big businesses are always competing to achieve higher quality, they obviously feel the same way about finding employees who they believe are capable of delivering that level of potential. By turning social networks to their advantage, employers are able to conduct background checks to determine the value of those they are considering for hire. Whether it is big government or big business monitoring everyone, these circumstances leave people in a desperate situation where they must defend their reputations to whatever extent necessary. The best way to fight back is by fighting fire with fire. Rather than submitting to the technologies that are already in wide use (which in turn ensures more power to those who distribute the products), more innovation needs to be encouraged among the people. As people learn to be more tech-savvy and promote their own products, they are able to establish a greater sense of independence from the big businesses, as well as the big governments. As more become independent, there is also more competition. As more competition comes about, there is less chance for any one party to impose its will on all.

Week 13: Comments

Comment #1: http://lostscribe459.blogspot.com/2010/11/week-13-reading-notes.html

Comment #2: http://christyfic.blogspot.com/2010/11/reading-notes-week-13-dec-6-2010.html

Week 13: IT Issues: Security and Privacy

Because the latest breakthroughs in technology allow just about any piece of information to be created, published, and distributed for the general public to see, there is more to consider than the risk of coming across junk. One of the most frightening aspects of the Internet is the risk of our privacy being compromised. Since any piece of information can easily make its way through the Internet, some of the content that is bound to become more visible to a wider range of users may include information about individuals. People have every right to feel afraid about their personal backgrounds being made public, especially when there is the issue of how the authorities may react. In order for criminals and terrorists to remain one step ahead of the law for the sake of carrying out their plans with greater success, they needed to adopt more sophisticated methods. As they further ensured their survival, the government obviously became more paranoid. Given that certain individuals with antisocial personality disorder are capable of hiding in plain view, secretly spying on everyone seems like the most logical approach to detect the suspects and uproot the perpetrators. Because there is always the possibility that the reasoning of those carrying out the investigations can be overtaken by paranoia, a lust for power, or other factors that lead to corruption, just about anyone is prone to being labeled a suspect. Taking such a factor into consideration, it should be considered justifiable for libraries to refuse to disclose information about their patrons. When the law is pursuing those engaging in antisocial activities and a lead points to a library, it is crucial for the staff to cooperate. Yet if there are suspicions that the investigation is being conducted in a reckless manner (i.e. utilizing the most unconvincing reasons to potentially prosecute an innocent civilian), only then would it seem appropriate for the library to stand its ground. Reassurance must be given that the investigators will abide by a genuinely proper procedure in carrying out their tasks so that the rights of the patrons are still respected.

Based on what I can remember from one of my Information Technology courses during my junior year as an undergraduate, it is often governments that are the first to utilize the latest technological breakthrough. As soon as the next innovation occurs, the predecessor becomes available to the general public (with big companies often being next in line). The United States government is obviously not exempt from this pattern, especially in times of war. It is important to have the most advanced technology readily available and at one’s disposal for the sake of being able to out-maneuver the enemy with greater ease. The outstanding technologies we have today were developed as a result of the major world events that took place during the 20th century: World War I, World War II, and the Cold War. Had it not been for the global impact they made, we probably would not have the technological luxuries that we have today. However, all this seems to come at a price. As the times change, so does crime and the threats it poses. With terrorism officially becoming the new military threat to civilization, there is a new kind of challenge being faced. Since the participants in such activities are capable of carrying out their plans in a manner that is becoming increasingly difficult to detect out in the open, our government felt compelled to spy on our own people just to pick up on potential leads. The irony of this entire scenario is that we condemned the Soviet Union for its surveillance of civilians (which was especially the case for the Stasi in East Germany) and Nazi Germany for the devastation it inflicted, and yet this nation had to experience the ordeals of the McCarthy hearings, the Vietnam War, the Patriot Act, and the War in Iraq. It is becoming apparent that the power people have garnered through the use of updated technology is gradually turning them into everything they have always hated. It is only a matter of time before they are rendered as no different, only to be replaced by a new form of superpower, which will itself go through a similar process all over again.

Monday, November 22, 2010

Muddiest Point

I thought the weakest of all the readings was the article "Using a wiki to manage a library instruction program: Sharing knowledge to better serve patrons" by Charles Allan. I am not saying it was useless. I found it very informative and, considering the topic the class needed to focus on for the week, I knew for sure it fit right in. The problem was that we were also required to watch a video from TED, titled “How a ragtag band created Wikipedia” by Jimmy Wales. Because the video explained how useful a purpose wikis tend to serve (not to mention that Wikipedia is often the first thing that pops into people’s minds whenever the technology is mentioned), I felt as though there was probably no need to bring up the article by Allan. But then again, I tend to absorb more information when I am listening to content rather than reading it, so that could explain why I felt I was able to grasp more about wikis from the Wales video than from the Allan article.

Week 12: Comments

Comment #1: http://lostscribe459.blogspot.com/2010/11/week-12-reading-notes.html

Comment #2: http://rjs2600.blogspot.com/2010/11/readings-for-11-29-12-3.html

Week 12: Social Software

One particular habit in which members of the scientific community need to constantly engage is the use of journals or logs for their work. By keeping track of all their activities throughout the day, scientists maintain and update a source that can serve as a reference for them. Through the utilization of the web log, or blog, the technology not only gives scientists a more efficient means of recording the events of each time period as entries, but also allows them to easily publish the information and let others observe the details of their studies. The drawback to the data is that they have been recorded in an off-hand manner from the perspective of an individual. This in turn can render the source not quite presentable and fall short of being considered professional. That is why there are wikis to encourage more collaboration within the scientific community. As members bring their observations to the table, each individual takes part in assembling the information and editing it in a manner that ensures coherence and accuracy. However, just because these sources are being made available, it does not necessarily mean they will be accessible. Of course, there are always search engines to help locate a source, but unless the user knows the title of an article or its publisher, the information basically remains lost in the shuffle. That is why there is the practice of folksonomy to help narrow down the search. By providing the means to label sources based on the topics with which readers tend to associate them, the tags serve as an alternative option for retrieving the types of information that users seek within the confines of a specific subject. Such a practice could be utilized by the scientific community, but it should be intended more for the general public. It is for their sake that the information needs to be made available and accessible, which means the people should be entitled to label the sources in whatever manner they see most fitting. Although these technological innovations have provided members of the scientific community with more efficient means to gather, assemble, publish, and distribute their research, even those tasks should not be made exclusive to those individuals in particular. As a result of Wikipedia, the general public has not only the power to organize sources of information their way, but also to create them. The website permits people to conduct their own research in certain fields and contribute their own articles. As more sources of information from the scientific community have become widely available over the Internet, users have been given more opportunities to access the content, which enables them to create the kind of material that a website like Wikipedia seeks in order to fulfill its purpose.
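
To make the folksonomy idea a little more concrete, here is a minimal sketch of a tag index: readers attach free-form labels to resources, and intersecting the tag sets narrows a search to a specific subject. Every title and tag below is invented for illustration.

```python
# A minimal sketch of a folksonomy-style tag index: users attach free-form tags
# to resources, and a search intersects the tag sets to narrow down results.
from collections import defaultdict

tag_index: dict[str, set[str]] = defaultdict(set)

def tag(resource: str, *tags: str) -> None:
    """Record that a user labeled `resource` with the given tags."""
    for t in tags:
        tag_index[t.lower()].add(resource)

def search(*tags: str) -> set[str]:
    """Return resources carrying every requested tag (empty set if none match)."""
    sets = [tag_index.get(t.lower(), set()) for t in tags]
    return set.intersection(*sets) if sets else set()

tag("Blog post on protein folding", "biology", "proteins", "lab-notes")
tag("Wiki article on protein folding", "biology", "proteins", "collaborative")
tag("Blog post on telescope calibration", "astronomy", "lab-notes")

print(search("biology", "proteins"))   # both protein-folding items
print(search("lab-notes"))             # the two blog posts
```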

I am reminded of the book “The Meaning of Everything: The Story of the Oxford English Dictionary” by Simon Winchester. Some of the earlier versions of the English dictionary were developed single-handedly, as was done by Samuel Johnson and Noah Webster. Although the English-speaking world does indeed owe them a debt of gratitude for their devotion and hard work, for obvious reasons their contributions were simply not sufficient. When it came to the development of the Oxford English Dictionary, a completely different approach was to be utilized. The original members of the staff responsible for establishing the project decided that instead of just taking up the responsibility themselves, the general public was also to be involved. People were requested to submit their lists of words, as well as their definitions, while the staff consulted each other on what to accept and reject and what edits needed to be done. From there, the entries were to be compiled, organized, and then assembled. The parallel should be pretty noticeable between how this endeavor came about and how Wikipedia made its impact. Just like the developers of the Oxford English Dictionary, the creators of Wikipedia knew that assembling a reliable source of information is no easy task for a group of a few individuals; hence their belief that better results could be achieved by turning to the general public for assistance. In comparison to how Encyclopedia Britannica presents itself, Wikipedia offers a lot more flexibility. Whereas the former collects its information very selectively and has its articles created and assembled by a devoted group of scholars in a very professional manner, the latter allows just about anyone to contribute whatever pieces of information they want and to assemble the sources in a similar manner. Although it appears as though Wikipedia functions in a chaotic manner, the staff is smart enough to realize that a certain degree of order always needs to be maintained. Because the staff is always looking through articles to check for accuracy and neatness, this is a clear indication that the professional model often observed by the older generation (as is the case for Encyclopedia Britannica, of course) has never been abandoned, or at least not in its entirety. Even though the manner in which most people tend to gather and publish information via Wikipedia may not be as professional as how scholars perform their duties for sources such as Encyclopedia Britannica, as long as Wikipedia gives readers a general idea about every topic that is available (as any other encyclopedia attempts to accomplish), then it is successfully fulfilling its purpose.

Tuesday, November 16, 2010

Muddiest Point

I believe the article by Sarah L. Shreeves, Thomas G. Habing, Kat Hagedorn, and Jeffrey A. Young, titled “Current developments and future trends for the OAI protocol for metadata harvesting” from Library Trends, was probably the weakest piece of material for the week. I am not saying it was uninformative or incomprehensible in any way. Indeed, I was able to figure out the core elements of the article and how it was related to the other two articles. Then again, I did study the subject before, so that could explain why I was able to catch on. The reason I believe this article in particular seemed like the weakest is that when I compare it to the others, I notice that they included visuals as a means to depict how the systems they are describing tend to function. The article we were given about metadata does not provide such aids, which leaves the readers to use a little more effort to figure out how it works. Taking those factors into consideration, it would only seem fair that an article on this topic should follow the example of the other two by utilizing visuals of its own to better explain how it works.

Z39.50 at Zoom@ Pitt and OAI-PMH at NSDL

I can confirm noticing the quirks that “Z39.50 at Zoom@ Pitt” and “OAI-PMH at NSDL” are capable of demonstrating. In the case of the former, I think the reason why some of the databases tend to function more slowly than others is probably the amount of content being handled. As a database needs to manage more content, it obviously takes more time to maneuver through the growing collection. If the load were any lighter, the database would not experience as much of this problem. As for the latter, because the organization was created via a government agency, the space and processing capacity it can afford is practically unlimited, or at least compared to what the other can achieve. However, this is not to suggest that it would be without fault. It is likely that the website is using more resources than it actually needs. As a result, many options end up being created for the users, only to narrow down on the same sets of documents that are actually available on the website over and over again. Then again, these observations are all based on my own personal assumptions.
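
For what it is worth, the harvesting side of OAI-PMH can be sketched in just a few lines: a repository exposes a base URL, and a harvester sends it standard verbs such as ListRecords with a metadata format like oai_dc. The endpoint below is a placeholder, and this is only a bare-bones illustration of the protocol, not how NSDL actually runs its service.

```python
# A rough sketch of an OAI-PMH harvesting request. The base URL is a placeholder;
# any OAI-PMH endpoint that supports the oai_dc metadata format should respond
# to the same verbs.
import urllib.request
import urllib.parse
import xml.etree.ElementTree as ET

BASE_URL = "https://example.org/oai"                 # placeholder OAI-PMH endpoint
DC = "{http://purl.org/dc/elements/1.1/}"            # Dublin Core namespace

def list_record_titles(base_url: str) -> list[str]:
    """Issue a ListRecords request and pull out the Dublin Core titles."""
    query = urllib.parse.urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})
    with urllib.request.urlopen(f"{base_url}?{query}") as response:
        tree = ET.parse(response)
    return [el.text for el in tree.iter(DC + "title") if el.text]

if __name__ == "__main__":
    for title in list_record_titles(BASE_URL):
        print(title)
```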

Week 11: Comments

Comment #1: http://adamdblog.blogspot.com/2010/11/unit-11-reading-notes-11-22-2010.html

Comment #2: http://acovel.blogspot.com/2010/11/unit-11-reading-notes.html

Week 11: Web Search and OAI Protocol

See this link: http://att16.blogspot.com/2010/11/week-11-web-search-and-oai-protocol.html

Tuesday, November 9, 2010

Muddiest Point

I believe the article by Clifford A. Lynch was probably the weakest piece of material. What led me to state this opinion is that when I compare it to the other two, it seems rather wordy. The first two articles from “D-Lib” seem a lot more straightforward in terms of helping the reader understand what the topic of this week is all about. Because the information from the third article did not demonstrate that sort of brevity, I had a bit more difficulty absorbing the core aspects. Of course, I am not claiming the article was bad, let alone useless. It was still informative and I managed to figure out how it was related to the other readings. All I am saying is that if the first two articles provide information in a rather straightforward manner, then it would seem logical for the third one to be presented in a similar way. Other than that one particular issue, I did not really have that much trouble absorbing the content from any of the readings.

DiLight System/NYPL DL

The websites we were given have clearly demonstrated how the search for items within libraries has been simplified. Whereas sorting through finding aids in their physical form can be a much more tedious process, the digital versions prove to be a lot more efficient. Through the efficient use of digital finding aids, the searches have shown that the libraries hold sources of information in all kinds of formats, ranging from books to audio recordings to DVDs. What enables this sort of solid, yet flexible, structure is the database within the computer system. The database keeps track of all the records that indicate which items are the property of the library. Each record provides all the information regarding the item it represents. Since that information also includes where the actual item is located, the records can also function as finding aids. Of course, that is under the assumption that the represented items happen to be in their proper places.
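
A toy example of the point about records doubling as finding aids: a catalog database stores what an item is along with where it should be shelved, so looking up the record also tells the patron where to go. The titles, call numbers, and locations below are made up, and the sketch uses Python's built-in sqlite3 module purely for illustration.

```python
# A toy catalog database: each record describes an item and says where it lives,
# so the record serves as both a description and a finding aid.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE catalog (title TEXT, format TEXT, call_number TEXT, location TEXT)"
)
conn.executemany(
    "INSERT INTO catalog VALUES (?, ?, ?, ?)",
    [
        ("Linked", "book", "QA76.9 .B37", "3rd floor stacks"),
        ("Introduction to Metadata", "book", "Z666.5 .I58", "2nd floor reference"),
        ("Jazz Classics", "audio CD", "CD 1042", "media room"),
    ],
)

# A patron searching by title gets back the format and the shelf location,
# assuming the physical item really is where the record says it is.
for row in conn.execute(
    "SELECT format, call_number, location FROM catalog WHERE title = ?", ("Linked",)
):
    print(row)
```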

Week 10: Comments

Comment #1: http://rjs2600.blogspot.com/2010/11/readings-for-11-15-11-19.html

Comment #2: http://pittlis2600.blogspot.com/2010/11/week-ten-reading-notes.html

Week 10: Digital Library, Institutional Repositories

Because the introduction of digitization, as well as the wider use of the Internet, changed the way sources of information can be formatted and organized, it was only a matter of time before libraries, a haven for sources of information, had to adapt to these technological breakthroughs. However, in order for the incorporation to work, the existing staff within the libraries simply could not implement the tasks alone, especially with their increasingly outdated methods. The situation called upon computer scientists and their field of knowledge to collaborate with the librarians. As a result of the cooperation, the transition became a success, which in turn established and made use of the digital library. As the digital library became well recognized along with the increasing popularity of the Internet, more libraries had to keep up with the times by adopting these technologies. By maintaining a collection of sources in digital format and making use of the Internet as a means to keep them available, libraries were able to continue satisfying the needs of the general public, who readily engage in different methods to obtain information. Yet the libraries alone should not be burdened with the task of making digital and digitized sources available to the general public. The universities also contain a treasury of information sources within their archives. Because the academic system needs to keep up with the times as much as the public library system, it would seem logical that those institutions also make use of digitization and the Internet. Through the use of their institutional repositories, the items kept within can be digitized and published over the World Wide Web. As more institutions devoted to collecting, organizing, and maintaining sources of information adopt digitization and the Internet (as well as the other latest technological breakthroughs and trends), and so long as the people remain engaged with their gadgets, information can become more and more available and much easier to access for the general public.

As noted by Christine L. Borgman in her book “Scholarship in the Digital Age: Information, Infrastructure, and the Internet,” the notion of a “digital library” was dismissed at first. According to the skeptics, the concepts of the library and digitization were incompatible, i.e. “if a library is a library, it is not digital; if a library is digital, it is not a library.” As time passed, technology once again proved the skeptics wrong. Modern-day technology has clearly demonstrated that the format of the library does not have to remain within the confines of what has traditionally been defined as such. As long as an entity takes on the responsibility of collecting, organizing, and maintaining different sources of information (conventionally from various fields), it can still be considered, by a technicality, a library, whether it exists in a physical or digital format. The flexibility of this concept should also apply to the archival and academic communities, given that the items making up their collections can also exist in physical and digital formats. Because universities often maintain libraries and archives within their institutions, the incorporation of digitization and the Internet renders them among the most benevolent of contributors to the general public. However, their generosity does not have to stop there, or at least not within those specific areas. Since universities also preserve the research of the scholars who have contributed to their institutions, the utilization of these technologies also enables them to quickly publish those works and make the materials readily available via the Internet, thus allowing even more sources of information to become accessible to the general public.

Wednesday, November 3, 2010

Week 11: Web Search and OAI Protocol

Because people have the means to create their own websites, just about anyone can publish material over the Internet. As more people obtain the ability to do so, sources of information can easily get lost in the shuffle. That is why there are search engines to help establish order. A search engine utilizes an algorithm that calculates how strongly a website is associated with given keywords, weighted largely by measures of popularity, such as how many other sites link to it and how many visitors it receives. As a result, the most popular websites in every category end up at the top of the list when a search begins. However, just because a website happens to be more popular, it does not necessarily mean the content is accurate. This is where metadata was introduced, so that websites could be harvested in a more meticulous manner, concentrating on the quality of the content rather than the quantity of the visitors. However, the approach is bound to be met with disagreement, or even hostility, given that it could encourage some form of elitism. Technically speaking, metadata is capable of giving a select few the power to determine which websites should be rendered superior to others, while the opinions of many are ignored. With or without the metadata, the search results being presented ultimately scratch the surface at most. This is why there was a need for another technology that looks much deeper within the content of websites, providing greater accuracy and efficiency in the search results. The Deep Web approach achieves this goal through a compromise between the core attributes of the popular approach of the former and the selective approach of the latter.
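
The trade-off described above can be illustrated with a toy inverted index that orders results by a crude popularity score. Everything here, the pages, the link counts, the scoring rule, is fabricated to show the principle; it is not how any real search engine is implemented.

```python
# A toy inverted index: keywords map to pages, and results are ordered by a
# crude popularity score (here, a count of inbound links).
from collections import defaultdict

pages = {
    "popular-blog.example":   {"text": "cloud computing basics", "inbound_links": 950},
    "scholarly-site.example": {"text": "cloud computing architecture survey", "inbound_links": 40},
    "junk-site.example":      {"text": "cloud computing free prizes", "inbound_links": 300},
}

index = defaultdict(set)
for url, page in pages.items():
    for word in page["text"].split():
        index[word].add(url)

def search(query: str) -> list[str]:
    """Return pages containing every query word, most 'popular' first."""
    words = query.lower().split()
    hits = set.intersection(*(index.get(w, set()) for w in words)) if words else set()
    return sorted(hits, key=lambda u: pages[u]["inbound_links"], reverse=True)

print(search("cloud computing"))
# The junk site outranks the scholarly one purely on popularity, which is the
# weakness that metadata-based and deep-web approaches try to address.
```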

Regardless of what technology becomes available, what will frequently be the case is that a minority ends up overpowering the majority. The only significant difference is whether the shared opinion of a populace or an agreed-upon decision by the elite becomes the determining factor. Whichever prevails gets to choose the minority that is to overshadow the majority. There is always the possibility that tensions could arise between the two sides, but not all the time. Sometimes the popular choice and the right choice can in fact be one and the same, and it is instances of these mutual agreements and understandings that Michael K. Bergman’s proposition attempts to exploit. Even if only the most genuine of all websites manage to successfully become the minority, the situation remains difficult for the majority. Although there is the relief of knowing that websites providing nothing but junk are more likely to be thrown deeper into the depths, websites of better quality are still going to be ignored at times. A website can provide information just as professionally as any scholarly source could, but there is still no guarantee that it will achieve higher recognition. One factor that the innovation explained by Bergman ignores, which might also apply to future successors, is human nature. So long as people want quick results and get bored easily, whatever it is that any individual chooses to look into, chances are that the person will only glance at the top ten results at most and will only bother looking thoroughly through one of them (if at all). Unless a website has made an impact on both the populace and the elite, it would be lucky to reach even the top twenty.

Wednesday, October 27, 2010

Muddiest Point

I had a problem with the Ogbuji article, as well as the XML Schema tutorial. To my understanding, the objective of this week was to get a general idea about XML. When I came across the Ogbuji article, I felt like it was quite a jump compared to the Bryan and Bergholz articles. I did not know whether I was supposed to learn about all those “standards” for the sake of fulfilling my agenda during that particular unit. The moment I saw the tutorial from W3Schools, I assumed that of all the “standards” on Ogbuji’s list, the one on XML Schema was probably the most important. I could assume from there that after learning what purpose XML basically serves, I was supposed to know what sort of wonders it is capable of achieving, and then go into more detail on one of those wonders. Taking that factor into consideration, perhaps that one “standard” was simply picked at random, and maybe the notion of one being more important or useful than another depends more on the situation being confronted. If randomness was the case, perhaps for reasons of fairness it would be best not to choose one over another at all. Whatever it may be, I do know this for certain: as long as the objective of this unit was only to get a general idea about XML, I believe that the articles from Bryan and Bergholz alone would have sufficed.

Week 9: Comments

Comment #1: http://adamdblog.blogspot.com/2010/10/unit-9-reading-notes.html

Comment #2: http://jobeths2600blog.blogspot.com/2010/10/unit-9-readings.html

Week 9: XML

The creation of a website is one method of distributing information, which can be accomplished by having a basic understanding of HTML. Although the language is the essence of the website, the coding can be extremely tedious to execute. In need of simplification without the risk of sacrificing quality, the introduction of CSS demonstrated that such a transition was possible. However, there was yet another need to be spared from trouble without cutting corners, which in turn led to the creation of the Extensible Markup Language (or XML). The objective of XML is to further simplify the process of distributing information. Because the language provides a standard way for documents to be exchanged, information can be made available as files instead of going through the hassle of publishing the entire content through a website, which takes up a lot of time and space and makes the source available in only one format. In order for users to achieve such simplification, they need to learn the language. While Martin Bryan introduces the language to the readers, André Bergholz provides an opportunity to go into a little more detail, since the latter demonstrates XML being put into use by showing examples of the coding. Although both authors do their part in giving a general idea about the language, what they present are mere samples compared to what else XML has to offer. Uche Ogbuji has shown that because of the potential XML wields, it was able to inspire the creation of other technologies that also make use of the language, one of them being the W3C XML Schema. Such a breakthrough simplifies the process of handling XML files even further by describing and constraining documents in a manner that makes it possible to recognize authenticity and transfer content more easily. Of course, this goal cannot be achieved without having a better understanding of this standard as well, which is what the availability of the “XML Schema Tutorial” from W3Schools tries to accomplish. The tutorial tries to explain as much as it possibly can in a well-detailed manner, and it is just as well organized, which helps it serve as a reference.
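
As a small illustration of the kind of document exchange XML enables, the sketch below ships a record as plain text and pulls the fields back out with Python's standard library. The element names are invented, and actually validating the record against a W3C XML Schema (.xsd) would require an additional library such as lxml.

```python
# A small sketch of XML as a document-exchange format: a record travels as plain
# text, and any receiving program can parse the fields back out.
import xml.etree.ElementTree as ET

record_xml = """
<record>
    <title>Introduction to Markup Languages</title>
    <author>Jane Example</author>
    <year>2010</year>
</record>
"""

record = ET.fromstring(record_xml)
title = record.findtext("title")
author = record.findtext("author")
year = int(record.findtext("year"))

print(f"{title} by {author} ({year})")
```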

There is a French expression that goes “Plus ça change, plus c'est la même chose,” often translated as “The more things change, the more they stay the same.” Such a proverb especially applies to this case. The objective behind the creation of the website is to enable wider availability of sources of information, and therefore easier retrieval. Yet in order to know how information could be published in such a manner, people first had to know how to communicate via HTML. The language obviously had its difficulties, especially since there was so much to memorize and repeat. That was why CSS was invented, in an attempt to relieve the stress the former language imposes. Yet in order to achieve that goal, there had to be mastery of the new language as well. Of course, if there were aspects of HTML that could be simplified, then a similar logic applies to the rest of the process of publishing and exchanging information, which is where XML comes in. Yet in order to know how that process can be further simplified, one needs to achieve mastery of XML too. Because of what XML was capable of achieving, other technologies were created based on that language, intended to simplify certain aspects of it even further, with W3C XML Schema as one example. Yet in order to understand how that standard is meant to simplify certain aspects of XML, which itself eases burdens left over from working with CSS and HTML, one needs to master W3C XML Schema as well. Under the assumption it has not happened already (and I am sure it has), it is only a matter of time before the latest one goes through a similar process. Even when one obtains full mastery of one language, there is still the importance of learning about the predecessors. In case a problem occurs that the simplified version cannot detect, it is by having knowledge of the most essential layer of the infrastructure (in this case, HTML) that the root cause can be identified and repaired. And because there are so many layers intended to simplify everything, such tasks become all the more difficult to straighten out when they get out of control. This is the whole irony of the situation, and it only gets more ironic.

Saturday, October 23, 2010

Muddiest Point

If there was any article that I felt was unnecessary to mention, it was probably the “HTML Cheatsheet.” I am not saying that it failed to serve a useful purpose. However, if the objective of the “HTML Tutorial” from “W3schools.com” was to give the user a general idea of how HTML works, then I believe that one source alone could have sufficed. By adding that extra article, I felt as though HTML was getting a little extra attention. In my opinion, this becomes somewhat unfair to the other subjects in the week’s topic. If the topic of HTML deserves mention of another source via a “cheat sheet,” then it would seem logical that CSS should receive the same kind of treatment. Even though the objective of the latter is to simplify where the former appears complicated in one sense or another, I am sure that CSS has similar issues of its own in terms of trying to memorize so much about the language. People are as much at risk of forgetting certain aspects of CSS as of HTML, hence the other language should have a “cheat sheet” of its own readily available as well. I just figured that this issue was worth mentioning, if only for the sake of fairness and personal convenience.

Week 8: Comments

Comment #1: http://acovel.blogspot.com/2010/10/week-8-reading-notes.html

Comment #2: http://adamdblog.blogspot.com/2010/10/unit-8-reading-notes.html

Week 8: HTML and Web Authoring Software

When people have the knowledge to design their own website, they are also given the means to organize the information in whatever manner each individual feels most comfortable maneuvering before the launch. The essence of the website is the Hyper-Text Markup Language (HTML), and to achieve a better understanding of the language is to achieve a better understanding of how to express one’s self via the design of the website. For those who do not have the knowledge to do so, there are websites that allow users to obtain such an opportunity. The “HTML Tutorial” from “W3schools.com” gives users a detailed, step-by-step approach to comprehending the language, which can serve as a perfectly good source of reference for starters. As for those who have moved a little beyond the beginner level, the “HTML Cheatsheet” from “Webmonkey” can always come in handy whenever the more experienced could use a quick reminder. However, regardless of how well anyone may master the language, no one could ever deny the tediousness of HTML, with such difficulties including certain lines of code being repeated constantly throughout a document. For the sake of lessening the burden and saving more on resources, Cascading Style Sheets (CSS) were invented. In order for people’s lives to be easier, it helps for them to have a better understanding of this language as well, which is the goal that the availability of the “CSS Tutorial” from “W3schools.com” attempts to achieve. Even if someone achieves full mastery of this language too, if there is to be a collaborative effort on how information should be organized via the design of a website, there will be communication problems between those who know what they want but do not know how to express it, and those who know how to express it but do not know what they want. That is why there is the Content Management System (CMS) to create a compromise. Those who are more familiar with the goals are able to cooperate better with those who are more familiar with the language, and vice versa, thus eliminating the frustrations as the two groups collaborate on their project, which was the case for the library liaisons and the web development personnel at the Georgia State University Library.

The situation with these tutorials reminds me of a segment from “Phaedrus,” one of Plato’s works. Thoth, an Egyptian deity, was having a debate with King Thamus. The god insisted that his introduction of the writing system to the human race would enable information to be recorded and therefore better preserved. The king was pessimistic, claiming that the invention would actually do human beings a disservice, because they would rely less upon memory and therefore neglect their mental capacity. In parallel with this scene was a disagreement between Plato and his teacher, Socrates. Socrates was more sympathetic to the king, whereas Plato sided with the god. The teacher claimed that the written word can never substitute for the spoken word. Although information is being preserved, what the source is able to provide is confined to whatever has been recorded. When there are details that have not been made clear and the readers have more questions, words on paper simply cannot respond to them, and the author cannot always be present to explain everything. The phenomenon clearly points to a continuing need for human interaction. The availability of those tutorials works in a similar manner. Even though users have an opportunity to learn how to create on their own, the information that is given cannot always be sufficient. Details can be prone to misinterpretation, or there could be factors that have yet to be covered. Whatever the case may be, because the user/reader does not have a human contact directly available to provide some sort of guidance, the individual is pretty much stranded in the middle of nowhere. It was probably the realization of the problems these situations tend to impose that inspired the creation and utilization of the CMS. An important thought to bear in mind from the widespread use of differing methods of communication is that instead of one substituting for another, each should actually be complementing the other, and none of them could ever be fully replaced by the next.

Thursday, October 14, 2010

Muddiest Point

I think the “Hands-on Topic” we were given was the weakest piece of material for the week. I understand the concept of the activity and I do appreciate the lesson it is trying to teach, but I honestly do not believe that I needed to go through those extra steps in order to answer the question at the end. I already knew from personal and professional experiences outside of that activity how to answer that question. I could have easily provided something on the spot and posted it on my blog right away. Instead, for the sake of fulfilling the requirements of the homework, I decided to cooperate. This meant going through the time and trouble of not only trying to think up something worth asking about, but also waiting for someone at the other end to even bother responding. Is it really worth delaying something I could have easily finished in an instant? I simply get irritated whenever I have to endure hassle over matters that are in fact much simpler than they tend to appear. This may be just my imagination talking, but for me, all it takes is some kind of random technicality or any other circumstance beyond my control to automatically impose unnecessary obstacles, or even difficulties, which prevent me from doing something as mundane as going from Point A to Point B.

Week 7: Comments

Comment #1: http://skdhuth.blogspot.com/2010/10/week-7-notes.html

Comment #2: http://jsslis2600.blogspot.com/2010/10/week-7-reading-notes.html

Pitt's Virtual Reference

University of Pittsburgh provides virtual reference ask-a-librarian via http://www.library.pitt.edu/reference/. Please choose either the IM version or the email version of the service, and ask a reference question that you are interested to get an answer on.

Based on this experience and any previous experiences of face to face reference, think of the advantages or limitations of this virtual reference. I have created a discussion thread in the discussion board for any discussion about this.

I decided to contact the “Ask-a-Librarian” virtual reference system via e-mail, and this was the question I submitted:

“In regards to the "Instant Virtual Extranet," let me first say how much I appreciate your services, since it always helped me with my school work. However, I do have one concern: Has this library system or campus ever devised a solution for prolonging the connection time? Whenever I access an article through that service and take the time to read, I often find myself in a situation where I need to go through the whole connection process all over again each time I want to go on to the next article. I am not trying to hold anything against anyone. I just want to know if I simply have to deal with it. I am only asking out of curiosity's sake. I hope to hear from you soon.”

By the next day, this was the reply I was given in return:

“Hello Arek Toros Torosian

Thank you for writing to our Ask-a-Librarian service.

We appreciate your taking the time to send us your comments. I'll forward your email to our Web Services Librarian.

In the meantime, I'm wondering if you've considered downloading the articles to your desktop or to a flashdrive once you've located them.  This would allow you to take your time in reading articles without worrying about the amount of time that you take. When you finish reading the articles, you could delete them.”

I need not wait for the response from the Web Services Librarian to be able to explain the differences between this experience and face-to-face interaction. One major advantage e-mail has over the more direct, one-on-one method of approaching those who work in reference is that patrons have the opportunity to carefully compose and double-check what they want to say before bringing it to the other person’s attention. In the face-to-face approach, patrons need to know how to express themselves clearly on the spot if they want the person at the reference desk to understand and address their concerns. However, I am not trying to suggest that one method is superior to the other. One major advantage of face-to-face interaction is that the patrons are in a situation where those working in reference can easily retrieve the sources and directly present solutions to the concerns being raised. In the case of e-mail, there is always the possibility that patrons cannot express themselves clearly enough, which leaves those working in reference to respond with the wrong solutions (assuming they are able to provide anything in return). This in turn leaves the two individuals in a situation where they are exchanging messages back and forth until they are finally able to narrow down the main issue. In the end, whichever is more reliable for presenting concerns, whether e-mail or one-on-one interaction, depends on what method the individual feels more comfortable using (and to each one’s own).

Tuesday, October 12, 2010

Week 7: Internet and WWW Technologies

One of the latest breakthroughs in technology that enabled us to live in the society we have today is our entry into cyberspace. However, the opening of the gateway by itself did not really create those wonders. In order for that sort of space to be put to better use, connections needed to be established between separate locations. This goal could not be achieved without building an infrastructure that would allow more links to be assembled with each other, eventually leading to the interconnected network of networks we call the Internet. Although there are more possibilities for connections to be established, the linking by itself never simplifies the process of actually reaching those locations. When the staff at libraries were first introduced to the Internet, they knew that the system currently in use, the Integrated Library System (ILS), would have to be fully replaced at some point. The incorporation of the Internet did not make anyone’s job at the libraries any easier at first, mostly because the staff was too accustomed to the older model and the transition seemed like too much too soon. That is why there was interoperability to establish a compromise. As the staff learned to utilize the newer models for conducting their work, the former continued to be implemented as a means of guidance. At this gradual pace, the staff members were learning to be more accepting of the Internet as they became less dependent on the ILS, with predictions that it could finally be retired entirely in the long run. And yet if there was anything that tremendously simplified the process of reaching destinations or locating the sources we wish to seek, it was the introduction of the search engine, with Google proving itself a prominent example. Whatever it is that people are looking for, they have a better chance of obtaining it just by typing in a few words. Because of how the algorithm behind the technology was set up, the results being presented are based largely on how often other websites associated with the keywords link to a given page, so the most popular websites end up being the first ones recognized by the user. Although what Google has presented is not perfect, as long as it continues to give people the kind of quick and genuine results they want (and updates the means to do so as well), there is less and less likelihood that anyone would ever turn away from such a service.
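
The link-based idea behind Brin and Page's ranking can be sketched with a toy power iteration over a made-up four-site web graph. This is only an illustration of the principle that heavily linked-to pages float to the top; it is nowhere near Google's actual implementation.

```python
# A toy power-iteration sketch of link-based ranking over an invented web graph.
DAMPING = 0.85
links = {                        # who links to whom (made-up graph)
    "hub.example":    ["a.example", "b.example"],
    "a.example":      ["hub.example"],
    "b.example":      ["hub.example", "a.example"],
    "lonely.example": ["hub.example"],
}

rank = {page: 1.0 / len(links) for page in links}
for _ in range(50):                       # iterate until the scores settle
    new_rank = {page: (1 - DAMPING) / len(links) for page in links}
    for page, outgoing in links.items():
        share = DAMPING * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{page}: {score:.3f}")
# The page that many others link to ("hub.example") floats to the top.
```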

Regardless of what sort of breakthroughs modern-day technology is able to provide for mankind, one of the most noticeable flaws prevailing in each recurring transition is that items continue to remain lost in the shuffle. In reference to “Linked: How Everything is Connected to Everything Else and What it Means for Business, Science, and Everyday Life” by Albert-László Barabási, the sources by Jeff Tyson, Andrew K. Pace, and Sergey Brin and Larry Page have demonstrated the evolution of the connection in their respective order. Everything begins with a vast empty space waiting to be filled. The arrival of nodes fulfills that purpose, but then there is the issue of trying to establish order. The nodes sort themselves out by creating links with each other. A network is established, but the situation still seems like a mess, because there are links going all over the place. The issue regarding the links can be straightened out through the establishment of hubs. Once the hubs are established, more nodes are able to know right away where to establish their links. However, just because order has been established within the network, it does not necessarily mean the network has been perfected. The first nodes that make their way into the empty space have more opportunities to hone and refine themselves. By the time other nodes also make their way in, the older ones have already enhanced themselves enough to attract more attention. This in turn allows the older nodes to gather more nodes around them, thus establishing more links and converting themselves into hubs. The nature of such a network seems rather unfair to the nodes that come in too late. If they come in at the same time as some of the older ones and are unable to make the same kind of preparations by the time a new wave of nodes arrives, then chances are they will be overshadowed by the competitors and ignored by whatever followers they manage to garner. When a new node comes in, there is a chance it will also be ignored. If it manages to achieve some recognition, it will immediately come under the wing of a well-recognized hub. The possibilities for any of the newcomers to become hubs themselves seem rather slim, so long as the old-timers have the strongest foundations and are able to overpower the competitors with greater ease. The bottom line is that so many nodes end up being lost in the shuffle, simply because they were never able to establish as many links as successfully. Without those connections, very few, if any, people will ever get an opportunity to witness their potential. Of course, that is under the assumption that any of those nodes within the majority have any potential at all.
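
Barabási's "rich get richer" growth can likewise be simulated in a few lines: each newcomer links to an existing node with probability proportional to how many links that node already has, so the early arrivals tend to end up as the hubs. The parameters below are arbitrary choices made only for the sake of the illustration.

```python
# A rough simulation of preferential attachment: new nodes link to existing
# nodes in proportion to their current number of links, so early nodes tend
# to grow into hubs while latecomers stay on the margins.
import random

random.seed(42)
degree = {0: 1, 1: 1}        # start with two linked nodes
edges = [(0, 1)]

for new_node in range(2, 200):
    # pick an attachment target weighted by current degree (preferential attachment)
    nodes = list(degree)
    weights = [degree[n] for n in nodes]
    target = random.choices(nodes, weights=weights, k=1)[0]
    edges.append((new_node, target))
    degree[new_node] = 1
    degree[target] += 1

top = sorted(degree.items(), key=lambda kv: kv[1], reverse=True)[:5]
print("most-linked nodes:", top)   # typically dominated by the earliest node numbers
```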

Tuesday, October 5, 2010

Muddiest Point

I was feeling conflicted between the Wikipedia article on the “Computer Network” and the YouTube video on “Common Types of Computer Networks.” According to Frank J. Klein, the so-called “common types” are the Personal Area Network (PAN), Local Area Network (LAN), Wide Area Network (WAN), Campus Area Network (CAN), and Metropolitan Area Network (MAN). When I look at the Wikipedia article, I notice that its list of the different types of computer networks happens to be longer. Putting these circumstances into consideration, I do not know whether Klein may have forgotten a few other “common types” that the Wikipedia article took the time to mention, or whether the article was trying to be as informative as possible while the video was simply telling the viewers what they needed to know for starters. For the sake of avoiding overanalyzing the situation with the “Computer Network” article on Wikipedia, I decided to use only the details that related to what Klein had mentioned in the video. On the grounds that there was probably much more information I ended up ignoring, I might as well conclude that the “Computer Network” article was probably the least useful of all the readings, and therefore the weakest piece of material.

Week 6: Comments

Comment #1: http://mfarina.blogspot.com/2010/10/reading-notes-for-week-6-m-farina.html

Comment #2: http://rjs2600.blogspot.com/2010/10/readings-for-10-11-10-15.html

Monday, October 4, 2010

RFID and Libraries

See this link: http://att16.blogspot.com/2010/08/rfid-and-libraries.html

Week 6: Computer Networks, Wireless Networks

Because the computer is capable of digitally preserving so much information, it would seem logical to develop a means to transfer the content from one machine to the next. Although the invention of the disk managed to accomplish this task, human nature would once again become dissatisfied in the long run, as usual. There was obviously a need to transfer information with much greater efficiency, i.e. more simply and quickly. With the introduction of the Internet, computers were able to form networks, thus establishing the means to provide such a solution (for the time being, of course). A few commonly used types of computer networks are the Personal Area Network (PAN), Local Area Network (LAN), Wide Area Network (WAN), Campus Area Network (CAN; built from interconnected LANs), and the Metropolitan Area Network (MAN; similar to the CAN, only incorporating WAN technology as well), among others. As networks developed and catered to more and more individuals, these types, in their respective order, came to serve ever larger groups, from a single person, to a group of people, to an entire population. Because the library is a system that functions like any other organization, such as a business office, it would seem suitable and logical for a staff to carry out their duties via a LAN, which is the sort of network designed for that sort of setting. Since the library is often recognized as a powerhouse of information sources, the utilization of such a network should give the system more efficient means to organize its materials. However, there will always be circumstances beyond the control of the networks. For example, the computer can claim that a certain item is in the library's possession at a certain location, and yet it cannot be found within the system, or at least not in the particular area to which it was pointed earlier. This is why there is the option of incorporating Radio Frequency Identification (RFID) into libraries. The technology works by tagging items with a small computer chip that is read by an antenna. Through the use of this innovation, books that end up lost in the shuffle or just about anywhere else (perhaps misplaced by a patron or even stolen by a thief) now have a greater chance of being found by the library.
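
A toy sketch of the RFID inventory idea follows: the catalog maps tag IDs to titles and expected shelf locations, a (simulated) antenna pass reports which tags it actually saw and where, and comparing the two flags missing or misplaced items. Every tag ID, title, and location here is invented.

```python
# A toy RFID inventory check: compare a simulated antenna scan against the
# catalog's expected shelf locations to flag missing or misplaced items.
catalog = {
    "TAG-0001": {"title": "The World Is Flat", "shelf": "HC59 .F74"},
    "TAG-0002": {"title": "Linked",            "shelf": "QA76.9 .B37"},
    "TAG-0003": {"title": "Phaedrus",          "shelf": "B380 .P53"},
}

# Result of a shelf-reading pass with a handheld reader (simulated data).
scanned = {"TAG-0001": "HC59 .F74", "TAG-0003": "PR1234 .X99"}  # TAG-0002 not seen

for tag_id, record in catalog.items():
    seen_at = scanned.get(tag_id)
    if seen_at is None:
        print(f"MISSING   {record['title']} (expected at {record['shelf']})")
    elif seen_at != record["shelf"]:
        print(f"MISPLACED {record['title']}: found at {seen_at}, belongs at {record['shelf']}")
    else:
        print(f"OK        {record['title']}")
```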

If I have not already done so more openly, I would like to make a reference to “The World is Flat: A Brief History of the Twenty-First Century” by Thomas L. Friedman. What I am about to explain is based on what I can recollect from my readings since my senior year as an undergrad. The book got its title from the author's comparison between the growth of networking and the leveling of a playing field. In order for a field to be rendered playable, the area needs to be flattened. Once it has been flattened, the field becomes an invitation, and people who want to be involved in the game are always welcome to play. A network functions in a similar manner: once it has been established, those who wish to join are always invited to do so. As more people wanted to get involved, the network needed to expand to make sure they received their opportunities as well. It appeared as though the network was becoming one big game and the whole world just wanted to play. In order for people all over the earth to play a game that has reached across the globe, the world needed to be “flattened.” Compared to the Wikipedia articles on the “Local Area Network” and “Computer Network,” and the Frank J. Klein video on “Common Types of Computer Networks,” what Friedman presented does indeed seem like a promising future ahead of us. Once the Karen Coyle article on “Management of RFID in Libraries” is taken into consideration, however, Friedman begins to appear more naïve and delusional, since there is a potential dark side he may have overlooked. No one can deny what these networks are capable of accomplishing, but the dilemmas that RFID is capable of imposing could be a real turn-off. Apart from the ethical issues, such as privacy, that the technology has been raising, financial matters also seem to be a concern. Whether or not Friedman's dream of a “flat world” is possible, it is quite clear that such an achievement would take tremendous time, effort, and money. Even when other people are invited to and participating in the playing field, there is no guarantee that they will enjoy the game (or at least not on their terms); hence the expression: “You can lead a horse to water, but you cannot make it drink.”

Monday, September 27, 2010

Muddiest Point

As I looked through the required readings, I noticed that there was one article on databases, one on metadata, and one on an example of metadata being put into use. Taking those circumstances into consideration, I think it would have been reasonable to also include an article on one particular type of database. Of course, I am aware that the article on the Dublin Core Data Model managed to cover the issue. However, I still believe there should have been at least one example of a popular database program explained, so that we as readers could get better acquainted with what a typical database in current use is supposed to resemble. It might even help to have other articles on different examples of database programs; the explanations they provide would give readers opportunities to make comparisons. By the time readers reach the article about the Dublin Core Data Model, they would then be able to judge whether such a project has the potential to be universally embraced. Otherwise, if we simply read the article offhand, whatever promises it claims to deliver, we are in a situation where we can only accept the author's word.

Week 5: Comments

Comment #1: http://jsslis2600.blogspot.com/2010/09/week-4-reading-notes.html

Comment #2: http://pratt2600.blogspot.com/2010/09/unit-5-reading-notes.html

Week 5: Information Organization by Database, Metadata

See this link: http://att16.blogspot.com/2010/09/week-4-information-organization-by.html

Assignment 3

Part I: Jing Video

Video: http://www.screencast.com/t/M2U4YzI1Mz

In this video, I am demonstrating how to create a greeting card. The program I have used is "Print Artist: Version 23." Viewers need not repeat the exact same steps that I did in the recording, but can always turn to this source as an example to follow. I hope what has been presented will be of service to those who are watching.

Part II: Jing Screen Capture Images

Image 1: http://www.flickr.com/photos/54018848@N07/5030634473/

Image 2: http://www.flickr.com/photos/54018848@N07/5030633515/

Image 3: http://www.flickr.com/photos/54018848@N07/5031241652/

Image 4: http://www.flickr.com/photos/54018848@N07/5030623185/

Image 5: http://www.flickr.com/photos/54018848@N07/5031238338/

Image 6: http://www.flickr.com/photos/54018848@N07/5030620059/

Image 7: http://www.flickr.com/photos/54018848@N07/5030618939/

Image 8: http://www.flickr.com/photos/54018848@N07/5031234590/

Image 9: http://www.flickr.com/photos/54018848@N07/5030616709/

Image 10: http://www.flickr.com/photos/54018848@N07/5031232570/

Image 11: http://www.flickr.com/photos/54018848@N07/5031231388/

Image 12: http://www.flickr.com/photos/54018848@N07/5031229316/

Image 13: http://www.flickr.com/photos/54018848@N07/5030611127/

Image 14: http://www.flickr.com/photos/54018848@N07/5030610117/

Image 15: http://www.flickr.com/photos/54018848@N07/5030608905/

Image 16: http://www.flickr.com/photos/54018848@N07/5030607985/

Image 17: http://www.flickr.com/photos/54018848@N07/5031223430/

Image 18: http://www.flickr.com/photos/54018848@N07/5030605369/

Image 19: http://www.flickr.com/photos/54018848@N07/5030603671/

Image 20: http://www.flickr.com/photos/54018848@N07/5030602115/

Image 21: http://www.flickr.com/photos/54018848@N07/5030601033/

Image 22: http://www.flickr.com/photos/54018848@N07/5030599769/

Image 23: http://www.flickr.com/photos/54018848@N07/5031215248/

Image 24: http://www.flickr.com/photos/54018848@N07/5030597107/

Image 25: http://www.flickr.com/photos/54018848@N07/5030592917/

Thursday, September 23, 2010

Muddiest Point

I sort of felt conflicted between the Wikipedia article on “Data Compression” and the DVD-HQ article on “Data Compression Basics.” Because both sources specifically mentioned “lossy” and “lossless” formats, I knew right away I had to focus my attention on that concept. Of course, this saved me a lot of trouble in terms of where to look, only for me to realize that the path I took had another barrier of vagueness to confront. I later found myself constantly looking back and forth between the articles, just for the sake of making sure I was interpreting those terms correctly. I started having a strange feeling that the two sources were one and the same article. Whatever description I was able to conjure, I had to make sure it was not only brief enough to fit in a nutshell, but also able to establish a connection with the other two articles. Otherwise, I would have ended up needing to rewrite everything I had already written, as well as reread all the articles, in hopes of having better luck finding another way all the required readings could be related.

Week 4: Comments

Comment #1: http://guybrariantim.blogspot.com/2010/09/week-4-readings.html

Comment #2: http://pittlis2600.blogspot.com/2010/09/week-four-reading-notes.html

Wednesday, September 22, 2010

Week 4: Multimedia Representation and Storage

The benefit of being able to compress digital materials is that extra room can always be made for incoming storage. How much space becomes available often depends on the combined size of the compressed files and the amount of storage the hard-drive can handle. Nevertheless, by decreasing the size of the materials, more room can be made. However, the process is not exactly straightforward. People can always debate whether data should be compressed in a “lossy” or a “lossless” manner. In the case of the former, although the data is compressed to a smaller size than the latter achieves, more of the data ends up being discarded. As for the latter, although it may take up a bit more room in comparison, it keeps the data fully intact. Which version is considered suitable depends on whether the individual is more concerned about the amount of storage available or the quality of the data. This sort of situation must have placed the University of Pittsburgh's Digital Research Library in a dilemma after the Institute of Museum and Library Services provided a two-year grant. The staff needs to think not only about preservation issues via digitization, but also about financial ones. There is more to consider in deciding whether it is better to settle for a format that saves more space yet degrades the quality, or vice versa; what also needs to be weighed is which one is cheaper. Fortunately, Paula L. Webb was able to explain that a compromise is possible. If libraries can bring such technologies as what YouTube.com has to offer to their advantage, then they can create much more room for making much more material available and accessible, without spending as much money or having to sacrifice the quality of the data. By pursuing such an opportunity, a great deal of space can be made to store and organize sources of information at their best quality for free.
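As a minimal sketch of the lossless half of that trade-off, the following Python snippet uses the standard zlib module to shrink a block of text and then restore it byte-for-byte; the sample text and the compression level chosen are arbitrary.

```python
import zlib

# Build a block of repetitive text so the compression is easy to see.
original = ("Extra room can always be made for incoming storage. " * 50).encode("utf-8")

# Lossless compression: smaller on disk, but nothing is thrown away.
compressed = zlib.compress(original, 9)   # 9 = strongest compression level
restored = zlib.decompress(compressed)

print("Original size:  ", len(original), "bytes")
print("Compressed size:", len(compressed), "bytes")
print("Restored intact:", restored == original)  # True: byte-for-byte identical
```

A lossy format such as JPEG or MP3 would usually produce an even smaller file, but the detail it throws away can never be recovered from the compressed copy, which is the heart of the dilemma described above.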

Although the compression of data certainly has its potential, just like any other technological breakthrough, it is important never to give in to it all at once. I can vouch for this based on a recent experience that is still posing a problem for me. Since the very beginning of my studies in this master's program, I cherished the information I was being given about this profession and knew I was going to need it all for my career. Unfortunately, I also knew I was bound by time constraints, which meant I was unable to savor and analyze the knowledge the way I had wanted. That is why I took the precaution of saving all the work, including my homework, reading materials, and video lectures. I later learned that there was too little space left on my hard-drive. Because the next semester, which is the one I am in right now, was coming up, I wanted some kind of quick fix to organize the files and save space. I basically compressed the files, placed those archives inside other archives, and then inside still others. I made some room, but it did not really make much of a difference. Now, whenever I try to access those files, I am simply barred out. The only way to get at them is to empty my hard-drive of material I will not need, and since I am too busy with my school work, I simply do not have the time at the moment to look through my entire computer and carefully select what to render as garbage. As a consequence, I am currently unable to look back at my work whenever I need to reinvestigate certain issues of which I am reminded. If there is a valuable lesson to learn from this experience, it is to always have a clear understanding of how a technology works before attempting anything. When it comes to compression, it is always important to preserve the original versions of the sources, so that if anything goes wrong, a back-up will be available for an emergency. Otherwise, you will just end up dealing with data that serves no purpose.

Tuesday, September 21, 2010

Week 5: Information Organization by Database, Metadata

Whenever information needs to be gathered, its earliest form often comes as raw data. Although there are a lot of interesting facts to come across, what is being presented appears as nothing more than a jumble serving no purpose. The situation begins to change with the introduction of the database. The objective behind the database is to order the data in a manner that relates to the information it is trying to provide. The facts become far more comprehensible through organization, but the database alone is not the only tool capable of accomplishing such a task. Metadata also has the ability to organize data in its own unique way: the data can be managed by the features with which it is associated, i.e. the “content, context, and structure” serve as the “data about data” that can help get things into better order. One example of metadata being put to good use is the Dublin Core Data Model. What the model attempts to achieve is to establish a means of categorizing and organizing materials that can be universally embraced across various professions. Although it is still under development, even if the finished product turns out to be far from perfect, it can always serve as a good start in the right direction.
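To illustrate what the “data about data” actually looks like, here is a minimal sketch of a record built from the fifteen standard Dublin Core elements; the book being described and all of its values are invented for the example.

```python
# A hypothetical record using the fifteen Dublin Core elements; the book
# being described and all of the values are invented for illustration.
dublin_core_record = {
    "dc:title": "History of the Town Library",
    "dc:creator": "Doe, Jane",
    "dc:subject": "Public libraries -- History",
    "dc:description": "A digitized local history volume.",
    "dc:publisher": "Example University Press",
    "dc:contributor": "Digital Research Library",
    "dc:date": "1910",
    "dc:type": "Text",
    "dc:format": "application/pdf",
    "dc:identifier": "http://example.org/item/1234",
    "dc:source": "Print original held in the local collection",
    "dc:language": "en",
    "dc:relation": "Local History Collection",
    "dc:coverage": "Pennsylvania, 1850-1910",
    "dc:rights": "Public domain",
}

# Print the record in a consistent "element: value" layout.
for element, value in dublin_core_record.items():
    print(f"{element}: {value}")
```

Because every institution fills in the same fifteen elements, records from very different collections can, at least in principle, be searched and sorted in a uniform way, which is the universal embrace the model is aiming for.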

The issues concerning metadata, as well as databases, remind me of a fable attributed to Aesop entitled “The Man, the Boy, and the Donkey” (be sure to go to the following link for a better understanding of what I am explaining: http://mythfolklore.net/aesopica/jacobs/62.htm). When it comes to organizing data, the database needs to be structured in a manner that anyone can comprehend. If it reaches a point where too many people are having difficulty figuring out the design, then the database needs to be reconfigured or replaced. The introduction of metadata seemed like the simplified solution people had always wanted. However, when the technology was incorporated into the Internet, leading to the creation of the meta-tag, people were quick to express discomfort. Because sites can potentially be labeled, it should be self-explanatory why such a reaction was given. As for what the Dublin Core Data Model is trying to accomplish, as much as I want the staff developing it to succeed, I still believe the model will be met with a lot of disappointment. As the fable I posted tried to explain: “Please all and you will please none.”

Friday, September 17, 2010

Week 3: Comments

Comment #1: http://jsslis2600.blogspot.com/2010/09/week-3-reading-notes.html#comments

Comment #2: http://pittlis2600.blogspot.com/2010/09/week-three-reading-notes.html#comments

Week 2: Comments

Comment #1: http://jsslis2600.blogspot.com/2010/08/week-2-discussion-topics-notes.html#comments

Comment #2: http://pittlis2600.blogspot.com/2010/09/week-two-reading-notes.html#comments

Week 1: Comments

Comment #1: http://jsslis2600.blogspot.com/2010/08/introduction-and-week-1-readings.html#comments

Comment #2: http://pittlis2600.blogspot.com/2010/08/week-one-reading-notes.html#comments

Assignment 2

My Flickr account image collection: http://www.flickr.com/photos/54018848@N07/

Muddiest Point

If I had to choose what I felt was too vague about this week's topic, I would have to say it was the readings themselves. Of course, I am well aware that they were talking about computer software, but what in particular was I supposed to know about the topic? I did not feel this way about the Paul Thurrott article on Windows, since it took a more straightforward approach. As for the Machtelt Garrels reading on Linux and what was available on Mac OS X, I felt as though the information was all over the place. I wasted so much time trying to figure out how the readings were supposed to be connected and how it could all be summarized. To me, information on computer software can be reworded, but it cannot really be summarized; the facts are simply taken as they are presented, and it is merely a matter of wording them in a manner the other person can comprehend. It was not until I discovered that the three computer software systems share a common history that I finally had something to analyze and worth writing about. If I want a general idea about a certain computer software system, of course I will look into any source available to me. But if there is a particular issue about a computer software system that I need to know about, I would like to have a more concentrated article.

Google Desktop

After you have fun, you can think of a question. What does this tell us about the future of library and librarian?

I will need to make a reference to David Weinberger's theory of “The Three Orders of Order” to answer this question. Of course, a library must fulfill its duty by gathering sources of information and then attempting to organize them. It is logical to assume that the more a collection grows, the more difficult it becomes to organize, which also means greater possibilities of desired items getting lost in the shuffle and becoming irretrievable. What contributes to the difficulty is never really the increase in numbers itself, but rather that so many books can belong to so many different fields of study. Whatever model is used for organizing the books, it needs to be carried out all the way. The introduction of the Dewey Decimal System managed to solve the problem, but the man who invented it (and he was a very eccentric man indeed) had been corrupted by his Christian bias and Western Eurocentric views. Just about any book that concentrates on fields outside of those spheres would have some difficulty finding its way into a collection that utilizes this model. Despite the imperfections, including some of its discriminating features, libraries still use this system on the grounds that it at least gets the job done. Regardless of how the actual books have been arranged, computer software systems offer much more flexibility in terms of organization. The availability of books within the library can be shown just by typing any word into the search engine. When the algorithm used by the search engine is unable to display the results the patron was seeking, the search can be narrowed down even further through certain filters, allowing users to seek an item by author, title, date, publication, genre, etc. Through the use of folksonomy, people are able to exercise their own methods of identifying items by tagging them. Although these features give patrons better chances of finding what they need, whatever has been invented and introduced shall always remain far from perfect.
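Since the passage above describes keyword searches, filters, and folksonomy tags in the abstract, here is a minimal sketch of that narrowing-down process in Python; the catalog records, tags, and the search function are all made up for the illustration.

```python
# Hypothetical catalog records, each with patron-supplied folksonomy tags.
catalog = [
    {"title": "French Provincial Cooking", "author": "Elizabeth David",
     "year": 1960, "tags": {"cooking", "france"}},
    {"title": "A History of French Culture", "author": "A. Martin",
     "year": 1998, "tags": {"france", "history"}},
    {"title": "Italian Cuisine", "author": "B. Rossi",
     "year": 2005, "tags": {"cooking", "italy"}},
]

def search(records, keyword=None, author=None, tag=None):
    """Return records matching a title keyword plus any optional filters."""
    results = records
    if keyword:
        results = [r for r in results if keyword.lower() in r["title"].lower()]
    if author:
        results = [r for r in results if author.lower() in r["author"].lower()]
    if tag:
        results = [r for r in results if tag.lower() in r["tags"]]
    return results

# Example: everything with "french" in the title, then narrowed by a tag.
print([r["title"] for r in search(catalog, keyword="french")])
print([r["title"] for r in search(catalog, keyword="french", tag="cooking")])
```

The point of the sketch is only that each filter shrinks the result set; none of it guarantees the physical book is actually sitting where the record says it is.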

What patrons need to realize is that librarians cannot do everything for them. Just because librarians spend a lot of time surrounded by books does not necessarily mean they actually take the time to read them all. A patron cannot simply provide a description of a book with a few details and expect a librarian to automatically know what the person is talking about and immediately retrieve it. Of course a librarian needs to know how the library's organizational system works, be it the Dewey Decimal System or the Library of Congress System. As libraries incorporate newer technologies into their services, it is just as vital for the staff members to know how to use the equipment as well. Whatever resources the librarians are able to utilize, the most they can really do is narrow down the search; without any guarantee the item will actually be found every time, the patrons need to take it from there. The patrons also need to realize that since the librarians, who are not perfect, have organized the materials based on models that are not perfect either, a particular item may not always be found in the location they anticipate. For example, consider a book on French cuisine. The librarians would have to decide whether it belongs in a section devoted to French culture or to cooking, seeing as it covers both fields. The patrons can debate all they want about which section it should have belonged to, but in the end, this is the decision the librarian made for the sake of getting the job done. At least the computer system is more capable of reaching a compromise. However, even if the computer system gives an exact location for where the item should be, it does not necessarily mean it will be found. Because most patrons never take the time to familiarize themselves with any organizational system, they have a tendency to leave books in some of the most random places all over the library. Regardless of what sort of solutions are provided in the future, whether they are intended for physical or digital formats, something will always get lost in the shuffle in one form or another. And regardless of how efficient the system becomes, the patrons will always have certain issues to bring up to the librarians. Then again, it is because of those complaints that technologies evolve.

Thursday, September 16, 2010

Week 3: Computer Software

As computer hardware evolved, so did the means of operating the equipment, which in turn inspired the creation of computer software. What drove that creation was people's need for a system that was smaller and more sophisticated in appearance, and simpler and more reusable in practice. This was the goal UNIX was able to achieve, and its model was later carried forward by Linux. As effective as it may have been, it was certainly not perfect, as Steve Jobs was able to demonstrate. During the 1980s, Apple introduced a newer version of the computer system, which borrowed certain elements from UNIX while incorporating some of Apple's own ideas. That renovation of the system became another model of its own, which later led to the creation of Mac OS X. What Bill Gates had conjured, which later led to the creation of Windows XP, Windows Vista, and Windows 7, was certainly an accomplishment, though not an especially unique one either. In the end, the computer software products that we have are nothing more than variations of the same original system, each prospering in its own direction. Regardless of how much one differentiates itself from the others, the core elements seem to remain.

The parallel between the evolution of computer hardware and computer software lies in people's demands for tools that are more presentable and easier to use. At first, such technologies were available solely to the military. Later, big companies were able to obtain them for their own use; seeing as they have the money, it seems logical that the best materials are often given to the highest bidders. Much like how the U.S. Government constantly felt a need to update its equipment for the sake of trying to remain a step ahead of its Soviet counterpart, businesses are driven by the same urge when it comes to outdoing the competition. Each competitor tries to gain an advantage by looking for the flaws in existing products and then trying to improve on them by presenting a better version of the previous one. However, if there is anyone who is good at finding shortcomings, it is the consumers themselves. Because people by nature can never be satisfied, they will always look for an excuse to complain, and they will always find a reason to feel disappointed with their products. The only way these businesses can stay alive is to continue accommodating that non-stop dissatisfaction. The competition and the complaining are what make the technology and the industry prosper and evolve.

Saturday, September 4, 2010

Muddiest Point

I thought that the Wikipedia article on “[Personal] Computer Hardware” was the weakest element of this week's topic. It has nothing to do with the fact that the source is Wikipedia or that the reader is warned in advance that the article is in need of a clean-up. I am well aware that Wikipedia is not perfect, but I always depend on it whenever I want a general idea about a certain topic. So long as Wikipedia can achieve that goal just like any other encyclopedia, I will simply take what I am given; if I want to look more into the details of a certain topic, I will simply turn elsewhere. I guess what compelled me to believe something was lacking in this one article in particular was the fact that I bore Moore's Law in mind. Considering that technological innovations occur at such a rapid pace, I simply was not sure whether the information I received was basic enough. If I were to look up an article on “Computer Hardware,” I would anticipate seeing the elements that have remained constant through the years. Then again, maybe it is because of these breakthroughs that so little has remained stable enough to describe without being uninformative. Even if what I have been given is as basic as it can get by today's standards, there is always some likelihood that it will be considered misleading or inaccurate any day now. However, it makes little difference how anyone may have perceived the source; because the information is not etched in stone, it can always be edited to keep up with the times.

Digitization

Digitization: Is It Worth It?

I often prefer to believe it is. After all, through the digitization of materials, the technology has enabled a more efficient method for people to distribute sources of information. So long as individuals have their own personal computers and access to the Internet, they should be able to reach those sources easily. My opinion may seem very optimistic, but I am aware of the consequences that dependence on this technology can have. There is always a possibility that the hard-drive where the digitized versions of those sources are kept will crash, not to mention that even the slightest act of negligence can compromise the well-being of those files. Once this happens, so much information can potentially be lost. Although digitized copies can easily be retrieved by anyone, that scenario demonstrates that their lifespan can be much shorter. For physical copies, it is the other way around; thus neither form is superior to the other. This is why people need to be aware that digitized copies are not intended to replace the physical copies, but to complement them.

Digitization is expensive, how to sustain it? Is working with private companies a good solution? Any problems that we need to be aware for this approach?

When financing is the issue, the best way to sustain digitization is either to seek more funding or to reevaluate the spending. There is more to handling the technology than just buying the equipment. Unless the current staff members know or learn how to use it and are willing to take up the tasks without extra pay, the organization will need to hire more workers to maintain the equipment. If the treasury cannot afford such expenses, then the organization will have to wait and save until it can; contributions such as generous donations could speed up the process. I may be speaking from my personal experiences on this issue, but I would never recommend the option of working with private companies, with the exception of small businesses. When I think of private companies, I tend to think of the greedy corporations responsible for our current economic situation. Because of the reckless behavior that has persisted since the years of the Bush administration and its laissez-faire policies, they are bound to take control of and mess up everything the moment the opportunity is available to them. I tend to believe that smaller businesses stay truer to their word and are far more deserving of trust.

“Risk of a crushing domination by America in the definition of the idea that future generations will have of its world” Is this a valid concern?

I believe it is a valid and very serious concern. What often allows an empire to prosper is the advocating of tolerance. As people from different backgrounds are allowed to peacefully coexist, they are also able to bring more ideas to the table with less fear of persecution. Once the empire starts utilizing these ideas, it is better able to prosper. However, the people can always get too comfortable with the progress. In order to further satisfy their materialistic needs, they turn elsewhere to consume more resources, which can never be done without making more enemies. As the empire, with its poorly disciplined and gluttonous residents, looks for more places to consume without end while instigating more conflicts, it is only a matter of time before everyone on the outside unites against the common enemy, leading to its destruction from both the outside and the inside. The United States is in a similar situation. Through its civil liberties, our nation achieved the prosperity we have today, with our technologies among the greatest in the world. As inspiring as this may sound, it is just as disturbing to realize that this is the same country with one of the worst academic systems in the world. Considering that stupid people are able to wield the most advanced technologies in the world and dominate the Internet, what we have here is a global disaster waiting to happen.

Any other issues pop up.

We have every right to be fascinated with what these technologies can accomplish, but people fail to realize there is a responsibility they need to uphold on their part. The utilization of such equipment can make a workload easier in many ways compared to the current model an organization follows for performing its duties. But the equipment can also make the workload more difficult in other ways, such as maintenance, and it can become an even greater burden if the organization has no clue how to use it. Sometimes a business or an institution can go bankrupt just from trying to stay up-to-date with these technologies. This is why it is important never to dispose of the old models by which organizations conduct their work. Whenever they immediately incorporate these breakthroughs, it is often done without thinking the matter through, or at least not thoroughly enough. Without a clear plan for how to use the equipment, it is simply put into place and expected to get everything done right with the snap of a finger. Ironically, the exact opposite happens, making an even greater mess than before. By preserving the old model, as imperfect as it may be, it can always serve as an emergency back-up to reestablish order when the new ideas turn out to be a disappointment.

Week 2: Computer Hardware

As demonstrated in the Wikipedia article about “[Personal] Computer Hardware,” it is essential for people to know what a typical computer system basically consists of. To comprehend how the equipment works, there should obviously be a description provided, which includes a list of its components, the functions they contribute, and how they complement each other to form the system. This source of information is useful to have, but it is just as important not to depend on it too heavily, as the Wikipedia article and the Scientific American video on Moore's Law seem to indicate. Because the law observes that the number of transistors within a circuit doubles roughly every two years, there is an increasing likelihood that whatever model is currently being described is already outdated. This also means people will need to act more quickly in comprehending the latest breakthroughs. However, there is never any reason to dispose of such information, as the Computer History Museum is able to demonstrate. To gain a better understanding of how the computer system works, people also need to learn about its history; by preserving the models that were used in their own times, people are able to learn what inspired the successive innovations that followed. It is by understanding the past that we are able to confront the future.
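Since the paragraph above leans on Moore's observation that transistor counts double roughly every two years, here is a back-of-the-envelope sketch in Python of what that doubling implies; the starting count of one million transistors is an arbitrary illustration, not a figure from the readings.

```python
# Illustrative arithmetic only: double the transistor count every two years.
start_transistors = 1_000_000
doubling_period_years = 2

for years in (2, 4, 6, 8, 10):
    count = start_transistors * 2 ** (years / doubling_period_years)
    print(f"After {years:2d} years: about {count:,.0f} transistors")
```

Ten years of doubling every two years multiplies the count by 2^5 = 32, which is why a hardware description can feel dated almost as soon as it is written.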

It appears to me that the evolution of computer systems clearly reaffirms Thomas Kuhn's theory of scientific revolutions. As people try to gain a better understanding of the world, they go out and investigate. Based on whatever data they have gathered, they try to organize the details, and in order to organize them, a model needs to be created. Somewhere down the line, there is an anomaly that seems to contradict this model. One thing out of the ordinary after another, the model needs to be restructured or even replaced for the sake of accommodating those anomalies, only for the cycle to repeat. In parallel with Kuhn, what led to the creation of the very first computer was probably people's need to develop a more efficient model for organizing information (i.e. the ability to carry so much information in so little physical space). The anomalies were the discovered flaws that had been hindering progress. As they were confronted and handled, newer versions were presented each time, eventually leading to the models we have today. As of now, the latest anomaly involves the transistors and circuits: according to the video on Moore's Law, there is a limit to how many transistors a circuit can hold. This could lead to one of the following in the near future: the redesign of circuits to accommodate more transistors, the redesign of transistors to accommodate the circuits, or the redesign of both to accommodate each other. Either way, we are likely to witness a breakthrough completely different from what we have today within about ten years.

Tuesday, August 31, 2010

RFID and Libraries

Is RFID really useful in libraries?

I am willing to whole-heartedly acknowledge that RFID would be extremely useful for libraries. Whenever patrons misplace items they have borrowed and are unable to find them, this piece of technology can help librarians locate those items. The only way an item can be officially declared lost is if it is destroyed and/or the tag has been removed, which also means there is no guarantee of tracking down someone who committed a theft. The situation can be just as frustrating when items are lost within the library, which those with experience in shelf-reading (such as myself) often witness. Most patrons have a tendency to place the books they have glanced through briefly into random spots among the shelves. I would have believed they have trouble comprehending the Dewey Decimal System had it not been for the fact that they can make quite a mess of the fiction section as well. Sometimes the more considerate patrons can also be a burden. There are occasions when those people believe they are saving the staff some trouble by putting a book they borrowed for a while right back in its original location, without handing it to the front desk. However, this only does the opposite: because the system will claim that the book was never returned, the patrons end up being penalized. The use of RFID can certainly relieve much of the hassle for staff members searching for books within libraries.

Are privacy concerns about RFID in library a real concern?

There is no argument that the issue of privacy should spring to people's minds concerning the utilization of RFID. I am sure it is disturbing enough for patrons that the computer systems in the libraries are keeping track of their identities and where they live, since they must provide such information for validation purposes when applying for their library cards. Because the database also contains all the items each individual has ever borrowed, technically speaking, information about their personalities is also being documented. Throwing the RFID ingredient into the mix makes the recipe slightly more disastrous, now that the library is able to track the patrons down wherever they go, assuming they always carry the borrowed items with them. Then again, if the government wanted to spy on us, I am sure public officials would turn to more places than the local libraries to do their dirty work, and with more than just RFID. The biggest concern, at least in my opinion, about RFID in libraries is how the staff is going to handle the situation. Because items from the library can be detected with greater ease, there is always the possibility of a certain staff member using the technology to stalk patrons with greater efficiency. It is bad enough they can easily find out where the potential victims live.

How to make RFID a better technology for libraries?

The best way RFID can better serve libraries is if the staff members are required to follow a strict set of guidelines on how to use the technology. Those within the library system are about as likely to respect those rules as they are the guidelines already in place for the technologies currently available. If a library is capable of wielding that equipment responsibly, then it is safe to say that adding one more piece should not be much of a problem. There is the issue of figuring out how it works and how to use it, but other than that, the staff members should be able to get a good grasp on it like everything else they have had to learn; it is just one more protocol to memorize. If I were to decide under what conditions to use RFID after implementation, I would strongly suggest that the staff consider the option only when a situation gets a little out of control. As I have mentioned before, when patrons admit they have misplaced what they borrowed and cannot find it, or when items are probably jumbled up someplace random within the library, only then should RFID be brought into play.