Wednesday, October 27, 2010

Muddiest Point

I had a problem with the Ogbuji article, as well as the XML Schema tutorial. To my understanding, the objective of this week was to gain a general idea about XML. When I came across the Ogbuji article, I felt it was quite a jump compared to the Bryan and Bergholz articles. I did not know whether I was supposed to learn about all those “standards” for the sake of fulfilling my agenda during that particular unit. The moment I saw the tutorial from W3Schools, I assumed that of all the “standards” on Ogbuji’s list, the one on XML Schema was probably the most important. I could assume from there that after learning what purpose XML basically serves, I was supposed to know what sort of wonders it is capable of achieving, and then to go into more detail on one of those wonders. Taking that into consideration, perhaps that one “standard” was simply picked at random, and maybe the notion of one being more important or useful than another depends more on the situation being confronted. If randomness was the case, then perhaps for reasons of fairness it would have been best not to choose one over another at all. Whatever it may be, I do know this for certain: as long as the objective of this unit was only to gain a general idea about XML, I believe that the articles from Bryan and Bergholz alone would have sufficed.

Week 9: Comments

Comment #1: http://adamdblog.blogspot.com/2010/10/unit-9-reading-notes.html

Comment #2: http://jobeths2600blog.blogspot.com/2010/10/unit-9-readings.html

Week 9: XML

Creating a website is one method of distributing information, and it can be accomplished with a basic understanding of HTML. Although the language is the essence of the website, the coding can be extremely tedious to execute. In the need for simplification without the risk of sacrificing quality, the introduction of CSS demonstrated that such a transition was possible. However, there was yet another burden to be spared without cutting corners, which in turn led to the creation of the Extensible Markup Language (or XML). The objective of XML is to further simplify the process of distributing information: whereas CSS addresses presentation, XML addresses the description and exchange of the data itself. Because the language allows documents themselves to be exchanged, information can be made available as files instead of going through the hassle of publishing the entire content through a website, which takes up a lot of time and space and makes the source available in only one format. In order for users to achieve such simplification, they need to learn the language. While Martin Bryan introduces the language to readers, André Bergholz provides an opportunity to go into a little more detail, since the latter demonstrates XML being put to use by showing examples of the coding. Although both authors do their part in giving a general idea about the language, what they present are mere samples compared to what else XML has to offer. Uche Ogbuji shows that because of the potential XML wields, it was able to inspire the creation of other technologies built on the language, one of them being the W3C XML Schema. That breakthrough simplifies the handling of XML files even further by defining the structure a document must follow, so that files can be validated and their content exchanged more easily. Of course, this goal cannot be achieved without a better understanding of that technology as well, which is what the “XML Schema Tutorial” from W3Schools tries to accomplish. The tutorial explains as much as it possibly can in a well-detailed manner, and it is just as well organized, which helps it serve as a reference.
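
To make the exchange-and-validate idea concrete, here is a minimal sketch in Python. It assumes the third-party lxml package is installed, and the book record and schema are my own illustrations rather than anything from the readings: the record travels as a plain, self-describing file, and the W3C XML Schema confirms it is structured the way both parties expect.

```python
# A minimal sketch (assumes the third-party lxml package is installed).
# The record and schema below are illustrative, not from the readings.
from lxml import etree

# An XML record: self-describing data that can travel as a plain file.
record = etree.XML(b"""\
<book>
  <title>Linked</title>
  <author>Albert-Laszlo Barabasi</author>
</book>""")

# A W3C XML Schema declaring what a valid <book> must contain.
schema_doc = etree.XML(b"""\
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="book">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="title" type="xs:string"/>
        <xs:element name="author" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>""")

schema = etree.XMLSchema(schema_doc)
print(schema.validate(record))  # True: the record matches the schema
```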

There is a French expression that goes “Plus ça change, plus c'est la même chose,” often translated as “The more things change, the more they stay the same.” The proverb especially applies to this case. The objective behind the creation of the website was to make sources of information more widely available, and therefore easier to retrieve. Yet in order to publish information in such a manner, people first had to know how to communicate via HTML. The language obviously had its difficulties, especially since there was so much to memorize and repeat. That is why CSS was invented, in an attempt to relieve the stress the former language imposes. Yet in order to achieve that goal, there had to be mastery of the new language as well. Of course, if there were aspects of presenting information in HTML that could be simplified, a similar case applies to distributing the information itself, which is where XML comes in. Yet in order to know how XML spares us the trouble of publishing everything through web pages, one needs to achieve mastery of XML. Because of what XML was capable of achieving, other technologies were created on top of the language, intended to simplify certain aspects of it even further, with W3C XML Schema as one example. Yet in order to understand how that technology simplifies certain aspects of XML, which in itself eases burdens left by CSS, which in itself eases burdens left by HTML, one needs to master W3C XML Schema. Under the assumption it has not happened already (and I am sure it has), it is only a matter of time before the latest technology goes through a similar process. Even when one obtains full mastery of one language, there is still the importance of learning about its predecessors. When a problem occurs that the simplified layer cannot detect, it is by having knowledge of the most essential part of the infrastructure (in this case, HTML) that the root cause can be identified and repaired. And because there are so many layers intended to simplify everything, such tasks become all the more difficult to straighten out when they get out of control. This is the whole irony of the situation, and it only gets more ironic.

Saturday, October 23, 2010

Muddiest Point

If there was any article that I felt was unnecessary to mention, it was probably the “HTML Cheatsheet.” I am not saying that it failed to serve a useful purpose. However, if the objective of the “HTML Tutorial” from W3Schools.com was to give the user a general idea of how HTML works, then I believe that one source alone could have sufficed. By adding that extra article, I felt as though HTML was getting a little extra attention, which in my opinion becomes somewhat unfair to the other subjects in the week’s topic. If the topic of HTML deserves mention of another source via a “cheat sheet,” then it would seem logical that CSS should receive the same kind of treatment. Even though the objective of the latter is to simplify where the former appears complicated in one sense or another, I am sure that CSS has similar issues of its own in terms of how much there is to memorize about the language. People are as much at risk of forgetting certain aspects of CSS as of HTML, which is why the other language should have a readily available “cheat sheet” of its own as well. I just figured that this issue was worth mentioning, if only for the sake of fairness and personal convenience.

Week 8: Comments

Comment #1: http://acovel.blogspot.com/2010/10/week-8-reading-notes.html

Comment #2: http://adamdblog.blogspot.com/2010/10/unit-8-reading-notes.html

Week 8: HTML and Web Authoring Software

When people have the knowledge to design their own website, they also have the means to organize its information in whatever manner each individual feels most comfortable maneuvering before the launch. The essence of the website is the Hypertext Markup Language (HTML), and to achieve a better understanding of the language is to achieve a better understanding of how to express oneself via the design of the website. For those who lack that knowledge, there are websites that give users the opportunity to obtain it. The “HTML Tutorial” from W3Schools.com gives users a detailed, step-by-step approach to comprehending the language, which can serve as a perfectly good source of reference for starters. As for those who have moved a little beyond the beginner level, the “HTML Cheatsheet” from Webmonkey can always come in handy whenever the more experienced could use a quick reminder. However, regardless of how well anyone may master the language, no one can deny the tediousness of HTML, one difficulty being that certain lines of code are repeated constantly throughout the document. For the sake of lessening that burden and saving on resources, Cascading Style Sheets (CSS) were invented: a single style rule can replace the same presentation markup repeated across many pages. In order to make their lives easier, it helps for people to have a better understanding of this language too, which is the goal the “CSS Tutorial” from W3Schools.com attempts to achieve. Yet even if someone achieves full mastery of this language as well, in a collaborative effort over how information should be organized via the design of a website, there can be communication problems between those who know what they want but do not know how to express it, and those who know how to express things but do not know what is wanted. That is why there is the Content Management System (CMS) to create a compromise. Those who are more familiar with the goals are able to cooperate better with those who are more familiar with the language, and vice versa, thus eliminating frustrations as the two groups collaborate on their project, which was the case for the library liaisons and the web development personnel at the Georgia State University Library.
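
As a concrete illustration of the repetition CSS removes, here is a minimal sketch in Python. The markup inside the strings is my own invented example, not taken from either tutorial: the first version repeats the same presentation markup on every paragraph, while the second states the style once.

```python
# A minimal sketch; the markup in these strings is illustrative only.

# Without CSS: the same presentational markup is repeated on every
# element, so restyling the site means editing every page by hand.
without_css = """
<p><font face="Arial" color="navy">First paragraph.</font></p>
<p><font face="Arial" color="navy">Second paragraph.</font></p>
<p><font face="Arial" color="navy">Third paragraph.</font></p>
"""

# With CSS: one rule covers every paragraph at once, so a site-wide
# change of look means editing a single line in the style sheet.
with_css = """
<style>p { font-family: Arial; color: navy; }</style>
<p>First paragraph.</p>
<p>Second paragraph.</p>
<p>Third paragraph.</p>
"""

# The styled-once version is already shorter, and the gap grows with
# every additional paragraph or page.
print(len(without_css), "characters without CSS vs", len(with_css), "with")
```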

The situation with these tutorials reminds me of a segment from “Phaedrus,” one of Plato’s works. Thoth, an Egyptian deity, was having a debate with King Thamus. The god insisted that his introduction of the writing system to the human race enabled information to be recorded and therefore better preserved. The king was pessimistic, claiming that the invention would actually do human beings a disservice, because they would rely less upon memory and therefore neglect their mental capacity. The scene parallels a disagreement between Plato and his teacher, Socrates. Socrates was more sympathetic to the king, whereas Plato leaned toward the god. The teacher claimed that the written word can never substitute for the spoken word. Although information is preserved, what the source is able to provide is confined to whatever has been recorded. When certain details have not been made clear and readers have more questions, words on paper simply cannot respond to them, and the author cannot always be present to explain everything. The written word, in other words, created an even greater need for human interaction. The availability of those tutorials obviously works in a similar manner. Even though users have an opportunity to learn how to create on their own, the information given cannot always be sufficient. Details can be prone to misinterpretation, or there could be factors that have yet to be covered. Whatever the case may be, because the user/reader does not have a human contact directly available to provide some sort of guidance, the individual is pretty much stranded in the middle of nowhere. It was probably the realization of the problems such situations tend to impose that inspired the creation and utilization of the CMS. An important thought to bear in mind amid the widespread use of differing methods of communication is that instead of one substituting for another, each should actually be complementing the others, and none of them can ever be fully replaced by the next.

Thursday, October 14, 2010

Muddiest Point

I think the “Hands-on Topic” we were given was the weakest piece of material for the week. I understand the concept of the activity, and I do appreciate the lesson it is trying to teach, but I honestly do not believe that I needed to go through those extra steps in order to answer the question at the end. I already knew from personal and professional experiences outside of that activity how to answer it. I could have easily provided something on the spot and posted it on my blog right away. Instead, for the sake of fulfilling the requirements of the homework, I decided to cooperate. This meant going through the time and trouble of not only trying to think up something worth asking about, but also waiting for someone at the other end to even bother responding. Is it really worth delaying something I could have easily finished in an instant? I simply get irritated whenever I have to endure hassle over matters that are in fact much simpler in nature than they tend to appear. This may be just my imagination talking, but for me, all it takes is some kind of random technicality or any other circumstance beyond my control to automatically impose unnecessary obstacles, or even difficulties, which prevent me from doing something as mundane as going from Point A to Point B.

Week 7: Comments

Comment #1: http://skdhuth.blogspot.com/2010/10/week-7-notes.html

Comment #2: http://jsslis2600.blogspot.com/2010/10/week-7-reading-notes.html

Pitt's Virtual Reference

The University of Pittsburgh provides a virtual reference “Ask-a-Librarian” service via http://www.library.pitt.edu/reference/. Please choose either the IM version or the email version of the service, and ask a reference question you are interested in getting an answer to.

Based on this experience and any previous experiences of face-to-face reference, think of the advantages or limitations of this virtual reference. I have created a discussion thread in the discussion board for any discussion about this.

I decided to contact the “Ask-a-Librarian” virtual reference system via e-mail, and this was the question I submitted:

“In regards to the "Instant Virtual Extranet," let me first say how much I appreciate your services, since it always helped me with my school work. However, I do have one concern: Has this library system or campus ever devised a solution for prolonging the connection time? Whenever I access an article through that service and take the time to read, I often find myself in a situation where I need to go through the whole connection process all over again each time I want to go on to the next article. I am not trying to hold anything against anyone. I just want to know if I simply have to deal with it. I am only asking out of curiosity's sake. I hope to hear from you soon.”

By the next day, this was the reply I was given in return:

“Hello Arek Toros Torosian

Thank you for writing to our Ask-a-Librarian service.

We appreciate your taking the time to send us your comments. I'll forward your email to our Web Services Librarian.

In the meantime, I'm wondering if you've considered downloading the articles to your desktop or to a flashdrive once you've located them.  This would allow you to take your time in reading articles without worrying about the amount of time that you take. When you finish reading the articles, you could delete them.”

I need not wait for the response from the Web Services Librarian to be able to explain the differences between this experience and face-to-face interaction. One major advantage e-mail has over the more direct, one-on-one method of approaching those who work in reference is that patrons have the opportunity to carefully compose and double-check what they want to say before bringing it to the other person’s attention. With the face-to-face approach, patrons need to know how to express themselves clearly on the spot if they want the person at reference to understand and address their concerns. However, I am not trying to suggest that one method is superior to the other. One major advantage of face-to-face interaction is that patrons are in a situation where those working in reference can easily retrieve the sources and directly present solutions to the concerns being raised. In the case of e-mail, there is always a possibility that patrons cannot express themselves clearly enough, which leaves those working in reference to respond with the wrong solutions (assuming they are able to provide anything in return). This in turn leaves the two individuals exchanging messages back and forth until they are finally able to narrow down the main issue. In the end, whichever is more reliable for presenting concerns, whether e-mail or one-on-one interaction, would have to depend on what method the individual feels more comfortable using (to each their own).

Tuesday, October 12, 2010

Week 7: Internet and WWW Technologies

One of the latest breakthroughs in technology that enabled us to live in the society we have today is our entry into cyberspace. However, the opening of the gateway by itself did not really create those wonders. In order for that sort of space to be put to better use, connections needed to be established between separate locations. This goal could not be achieved without building an infrastructure that would allow networks to be linked with one another, eventually leading to the interconnected network of networks we now call the Internet. Although there were more possibilities for establishing connections, the linking by itself never simplified the process of actually reaching those locations. When the staffs at libraries were first introduced to the Internet, they knew that the system currently in use, the Integrated Library System (ILS), would have to be fully replaced at some point. The incorporation of the Internet did not immediately make anyone’s job at the libraries easier, mostly because the staff was accustomed to the older model and the transition seemed like too much too soon. That is why interoperability was needed to establish a compromise. As the staff learned to utilize the newer models for conducting their work, the former continued to be implemented as a means of guidance. At this gradual pace, staff members learned to be more accepting of using the Internet as they became less dependent on the ILS, with predictions that the latter could finally be retired in the long run. And yet if there was anything that tremendously simplified the process of reaching destinations or locating the sources we wish to seek, it was the introduction of the search engine, with Google proving itself the prime example. Whatever it is that people are looking for, they have a better chance of obtaining it just by typing in a few words. Because of how Brin and Page set up the PageRank algorithm, results are ranked largely by how many other pages link to a given page, and by how important those linking pages themselves are. The most heavily linked websites end up being rendered the most relevant to the search, which in turn allows those results to be the first recognized by the user. Although what Google presents is not perfect, as long as it continues to give people the kind of quick and genuine results they want (and keeps updating the means to do so), there is less and less likelihood that anyone will ever turn away from such a service.
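
To make the ranking idea concrete, here is a minimal sketch of the iteration behind PageRank, written in Python. The three-page web is my own invention for illustration, and the damping factor of 0.85 is the value suggested in Brin and Page’s paper; a real search engine combines this score with keyword matching and much more.

```python
# A minimal sketch of the idea behind Brin and Page's PageRank.
# The toy link graph is illustrative; 0.85 is the damping factor
# the original paper says they usually use.

links = {            # who links to whom, in a tiny three-page web
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}

d = 0.85
rank = {page: 1.0 for page in links}   # start every page equal

for _ in range(50):                    # iterate until the ranks settle
    new_rank = {}
    for page in links:
        # A page's rank is fed by the pages that link to it, each
        # contributing its own rank split among its outgoing links.
        incoming = sum(rank[p] / len(links[p])
                       for p in links if page in links[p])
        new_rank[page] = (1 - d) + d * incoming
    rank = new_rank

# C, which collects links from both A and B, comes out on top.
print(sorted(rank.items(), key=lambda kv: -kv[1]))
```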

Regardless of what sort of breakthroughs modern-day technology provides for mankind, one of the most noticeable flaws prevailing in each recurring transition is that items continue to be lost in the shuffle. Read against “Linked: How Everything Is Connected to Everything Else and What It Means for Business, Science, and Everyday Life” by Albert-László Barabási, the sources by Jeff Tyson, Andrew K. Pace, and Sergey Brin and Larry Page demonstrate the evolution of the connection in their respective order. Everything begins with a vast empty space waiting to be filled. The arrival of nodes fulfills that purpose, but then there is the issue of trying to establish order. The nodes straighten themselves out by creating links with each other. A network is established, but the situation still seems like a mess, because there are links going all over the place. That issue can be straightened out through the establishment of hubs. Once the hubs are established, new nodes know right away where to establish their links. However, just because order has been established within the network, it does not necessarily mean the network has been perfected. The first nodes to make their way into the empty space have more opportunities to hone and refine themselves. By the time other nodes arrive, the older ones have already enhanced themselves enough to attract more attention. This in turn allows the older nodes to gather more nodes around them, establishing more links and converting themselves into hubs. The nature of such a network seems rather unfair to the nodes that arrive too late, or that arrive early but fail to prepare. If they come in at the same time as some of the older ones and are unable to make the same kind of preparations before a new wave of nodes arrives, then chances are they will be overshadowed by the competitors and ignored by whatever followers they manage to garner. When a new node comes in, there is a chance it will also be ignored; if it manages to achieve some recognition, it will immediately come under the wing of a well-recognized hub. The possibility of any newcomer becoming a hub itself seems rather slim, so long as the old-timers have the strongest foundations and can overpower competitors with greater ease. The bottom line is that so many nodes end up lost in the shuffle simply because they were never able to establish links as successfully. Without those connections, very few people, if any, will ever get an opportunity to witness their potential. Of course, that is under the assumption that any of those nodes within the majority have any potential to begin with.
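
Barabási’s “rich get richer” growth can be simulated in a few lines. The sketch below is my own toy model of preferential attachment, with all numbers invented for illustration: each newcomer links to an existing node with probability proportional to that node’s current number of links, and the earliest arrivals end up as the hubs.

```python
# A minimal toy model of Barabasi-style preferential attachment.
# All numbers here are illustrative, not from the book.
import random

random.seed(2010)
degree = {0: 1, 1: 1}   # two founding nodes, linked to each other

for newcomer in range(2, 1000):
    # Early nodes have had time to accumulate links, so this weighted
    # choice keeps favoring them over the latecomers.
    target = random.choices(list(degree), weights=degree.values())[0]
    degree[target] += 1
    degree[newcomer] = 1   # the newcomer starts with its single link

# The oldest nodes dominate the top of the list: they became the hubs.
hubs = sorted(degree, key=degree.get, reverse=True)[:5]
print([(node, degree[node]) for node in hubs])
```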

Tuesday, October 5, 2010

Muddiest Point

I was feeling conflicted between the Wikipedia article on the “Computer Network” and the YouTube video on “Common Types of Computer Networks.” According to Frank J. Klein, the so-called “common types” happen to be the Personal Area Network (PAN), Local Area Network (LAN), Wide Area Network (WAN), Campus Area Network (CAN), and Metropolitan Area Network (MAN). When I look into the Wikipedia article, I notice that its list of the different types of computer networks happens to be longer. Considering these circumstances, I do not know whether Klein may have forgotten a few other “common types” that the Wikipedia article took the time to mention, or whether the article was trying to be as informative as possible while the video was simply telling viewers what they needed to know for starters. For the sake of avoiding overanalyzing the situation with the “Computer Network” article on Wikipedia, I decided to use only the details that related to what Klein was mentioning in the video. On the grounds that there was probably so much more information I ended up ignoring, I might as well conclude that the “Computer Network” article was probably the least useful of all the readings, and therefore the weakest piece of material.

Week 6: Comments

Comment #1: http://mfarina.blogspot.com/2010/10/reading-notes-for-week-6-m-farina.html

Comment #2: http://rjs2600.blogspot.com/2010/10/readings-for-10-11-10-15.html

Monday, October 4, 2010

RFID and Libraries

See this link: http://att16.blogspot.com/2010/08/rfid-and-libraries.html

Week 6: Computer Networks, Wireless Networks

Because the computer is capable of digitally preserving so much information, it would seem logical to develop a means of transferring the content from one source to the next. Although the invention of the disk managed to accomplish this task, human nature would once again become dissatisfied in the long run, as usual. There was obviously a need to transfer information with much greater efficiency, i.e. more simply and quickly. With the introduction of the Internet, computers were able to create networks, thus establishing the means that would provide such a solution (for the time being, of course). A few commonly used types of computer networks are the Personal Area Network (PAN), Local Area Network (LAN), Wide Area Network (WAN), Campus Area Network (CAN; built from interconnected LANs), and the Metropolitan Area Network (MAN; similar in concept, but on the scale of a city), among others. As networks developed and catered to more and more individuals, the resulting types, in that respective order, ranged in the scale they serve, from a single person, to a group of people, to an entire population. Because the library is a system that functions like any other organization, such as a business office, it would seem suitable and logical for a staff to attend to their duties via a LAN, which is the sort of network designed for that kind of setting. Since the library is often recognized as a powerhouse of information sources, utilizing such a network should give the system more efficient means of organizing its materials. However, there will always be circumstances beyond the control of the networks. For example, the catalog can claim that a certain item is at a certain location, and yet it cannot be found within the system, or at least not in the particular area to which it pointed earlier. This is why there is the option of incorporating Radio Frequency Identification (RFID) into libraries. The technology works by tagging items with a computer chip whose identifier is picked up by an antenna. Through the use of this innovation, books that end up lost in the shuffle or just about anywhere at random (perhaps misplaced by a patron or even stolen by a thief) now have a greater chance of being found by the library.
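
Here is a minimal sketch, in Python, of the inventory idea behind RFID. The tag IDs, titles, and function name are all my own inventions for illustration: the reader hears whatever tags are in range and compares them against what the catalog says should be on the shelf.

```python
# A minimal sketch of the RFID inventory idea; every tag ID and title
# below is made up for illustration.

catalog = {
    "TAG-0042": "Linked / Barabasi",
    "TAG-0108": "The World is Flat / Friedman",
    "TAG-0377": "Phaedrus / Plato",
}

def scan_shelf(heard_tags):
    """Compare what the antenna actually picked up with what the
    catalog says should be on the shelf."""
    present = {t for t in heard_tags if t in catalog}
    missing = set(catalog) - present
    return present, missing

# The reader hears two known tags and one stray tag from elsewhere.
present, missing = scan_shelf(["TAG-0042", "TAG-0377", "TAG-9999"])
print("missing:", [catalog[t] for t in missing])  # flags the lost book
```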

If I have not already done so more openly, I would like to make a reference to “The World Is Flat: A Brief History of the Twenty-First Century” by Thomas L. Friedman. What I am about to explain is based on what I could recollect from my readings back in my senior year as an undergrad. The book got its title from the author’s comparison between the growth of networking and the establishment of a playing field. In order for a field to be rendered playable, the area needs to be flattened. Once it has been flattened, the field becomes an invitation for players, and people who want to be involved in the game are always welcome to play. A network functions in a similar manner. Once it has been established, those who wish to be involved are always invited to do so. As more people wanted to get involved with the phenomenon, the network needed to expand so as to make sure they received their opportunities as well. The situation appeared as though the network was becoming one big game and the whole world just wanted to play. In order for people all over the earth to be able to play a game that has reached across the globe, the world needed to be “flattened.” Compared with the Wikipedia articles on the “Local Area Network” and “Computer Network” and the Frank J. Klein video on “Common Types of Computer Networks,” what Friedman presents does indeed seem like a promising future ahead of us. Yet taking Karen Coyle’s article on “Management of RFID in Libraries” into consideration, Friedman begins to appear more naïve, even delusional, since there is a potential dark side he may have overlooked. No one can deny what these networks are capable of accomplishing, but the possible dilemmas that RFID is capable of imposing could be a real turn-off. Apart from the ethical issues, such as privacy, that the technology has raised, financial matters also seem to be a concern. Whether or not Friedman’s dream of a “flat world” is possible, it is quite clear that such an achievement would take tremendous time, effort, and money. Even when other people are invited to and participating in the playing field, there is no guarantee that they will enjoy the game (or at least not on their terms); hence the expression: “You can lead a horse to water, but you cannot make it drink.”