Monday, September 27, 2010

Muddiest Point

As I looked through the required readings, I noticed that there was one article on databases, one on metadata, and one on an example of metadata being put into use. With that in mind, it seems reasonable that there should also have been an article on a specific type of database. I am aware, of course, that the article on the Dublin Core Data Model partly covered this ground. However, I still believe there should have been at least one popular database program explained so that we as readers could become better acquainted with what a typical database in current use looks like. It might even help to have further articles on different database programs; taken together, their explanations would give readers the opportunity to make comparisons. By the time readers reached the article about the Dublin Core Data Model, they could then judge for themselves whether such a project has the potential to be universally embraced. Otherwise, if we simply read the article on its own, we can only take the author's word for whatever promises it claims to deliver.

Week 5: Comments

Comment #1: http://jsslis2600.blogspot.com/2010/09/week-4-reading-notes.html

Comment #2: http://pratt2600.blogspot.com/2010/09/unit-5-reading-notes.html

Week 5: Information Organization by Database, Metadata

See this link: http://att16.blogspot.com/2010/09/week-4-information-organization-by.html

Assignment 3

Part I: Jing Video

Video: http://www.screencast.com/t/M2U4YzI1Mz

In this video, I demonstrate how to create a greeting card. The program I used is "Print Artist: Version 23." Viewers need not repeat the exact steps I took in the recording, but they can always turn to this source as an example to follow. I hope what I have presented will be of service to those who watch.

Part II: Jing Screen Capture Images

Image 1: http://www.flickr.com/photos/54018848@N07/5030634473/

Image 2: http://www.flickr.com/photos/54018848@N07/5030633515/

Image 3: http://www.flickr.com/photos/54018848@N07/5031241652/

Image 4: http://www.flickr.com/photos/54018848@N07/5030623185/

Image 5: http://www.flickr.com/photos/54018848@N07/5031238338/

Image 6: http://www.flickr.com/photos/54018848@N07/5030620059/

Image 7: http://www.flickr.com/photos/54018848@N07/5030618939/

Image 8: http://www.flickr.com/photos/54018848@N07/5031234590/

Image 9: http://www.flickr.com/photos/54018848@N07/5030616709/

Image 10: http://www.flickr.com/photos/54018848@N07/5031232570/

Image 11: http://www.flickr.com/photos/54018848@N07/5031231388/

Image 12: http://www.flickr.com/photos/54018848@N07/5031229316/

Image 13: http://www.flickr.com/photos/54018848@N07/5030611127/

Image 14: http://www.flickr.com/photos/54018848@N07/5030610117/

Image 15: http://www.flickr.com/photos/54018848@N07/5030608905/

Image 16: http://www.flickr.com/photos/54018848@N07/5030607985/

Image 17: http://www.flickr.com/photos/54018848@N07/5031223430/

Image 18: http://www.flickr.com/photos/54018848@N07/5030605369/

Image 19: http://www.flickr.com/photos/54018848@N07/5030603671/

Image 20: http://www.flickr.com/photos/54018848@N07/5030602115/

Image 21: http://www.flickr.com/photos/54018848@N07/5030601033/

Image 22: http://www.flickr.com/photos/54018848@N07/5030599769/

Image 23: http://www.flickr.com/photos/54018848@N07/5031215248/

Image 24: http://www.flickr.com/photos/54018848@N07/5030597107/

Image 25: http://www.flickr.com/photos/54018848@N07/5030592917/

Thursday, September 23, 2010

Muddiest Point

I felt somewhat conflicted between the Wikipedia article on “Data Compression” and the DVD-HQ article on “Data Compression Basics.” Because both sources specifically discussed “lossy” and “lossless” formats, I knew right away that I had to focus my attention on that concept. That saved me a lot of trouble in terms of where to look, but the path I took presented another barrier of vagueness to confront. I found myself constantly flipping back and forth between the articles just to make sure I was interpreting the terminology correctly, to the point where I had the strange feeling that the two sources were one and the same article. Whatever description I managed to produce, I had to make sure it was not only brief enough to fit in a nutshell, but also able to establish a connection with the other two articles. Otherwise, I would have ended up rewriting everything I had already written, as well as rereading all the articles, in hopes of finding another way the required readings could be related.

Week 4: Comments

Comment #1: http://guybrariantim.blogspot.com/2010/09/week-4-readings.html

Comment #2: http://pittlis2600.blogspot.com/2010/09/week-four-reading-notes.html

Wednesday, September 22, 2010

Week 4: Multimedia Representation and Storage

The benefit of being able to compress digital materials is that extra room can always be made for incoming storage. How much space can be freed depends on the combined size of the compressed files and the amount of storage the hard drive can handle. Nevertheless, by decreasing the size of the materials, more room can be made. The process is not exactly straightforward, however. People can always debate whether data should be compressed in a “lossy” or a “lossless” manner. In the former case, although the data is compressed to a smaller size than the latter allows, more of the original data is discarded. In the latter case, although the file may take up more room by comparison, the data is kept fully intact. Which version is suitable depends on whether the individual is more concerned about the amount of storage available or the quality of the data. This sort of situation must have placed the University of Pittsburgh’s Digital Research Library in a dilemma after the Institute of Museum and Library Services provided a two-year grant. The staff needed to think not only about preservation issues raised by digitization, but about financial ones as well. There is the question of whether it is better to settle for a format that saves more space at the cost of quality, or vice versa, as well as which option is cheaper. Fortunately, Paula L. Webb was able to explain that a compromise is possible. If libraries can turn technologies such as what YouTube.com has to offer to their advantage, they can create far more room for making material available and accessible without spending as much money or sacrificing the quality of the data. By pursuing such an opportunity, enormous amounts of material can be stored and organized at high quality for free.
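To make the lossy/lossless trade-off concrete, here is a minimal Python sketch of my own (an illustration, not anything from the readings). It compresses a block of text losslessly with the standard zlib module and verifies that every byte comes back, then simulates a lossy scheme by discarding detail before compressing; the lossy result is smaller, but the discarded data can never be recovered:

```python
import zlib

original = ("The quick brown fox jumps over the lazy dog. " * 200).encode("utf-8")

# Lossless: zlib round-trips the data exactly.
lossless = zlib.compress(original, level=9)
assert zlib.decompress(lossless) == original  # every byte restored

# Simulated "lossy": discard detail first (here, every other character),
# then compress. The result is smaller, but the loss is permanent.
degraded = original[::2]
lossy = zlib.compress(degraded, level=9)

print(f"original: {len(original)} bytes")
print(f"lossless: {len(lossless)} bytes (fully recoverable)")
print(f"lossy:    {len(lossy)} bytes (detail permanently gone)")
```

Real lossy codecs such as JPEG or MP3 discard detail far more intelligently than this, of course, but the storage-versus-fidelity trade-off is the same.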

Although the compression of data certainly has its potential, just like any other technological breakthrough, it is important never to give in to it all at once. I can vouch for this based on a recent experience that is still posing a problem for me. Since the very beginning of my studies in this master’s program, I have cherished the information I have been given about this profession, and I knew I was going to need it all for my career. Unfortunately, I was also bound by time constraints, which meant I was unable to savor and analyze the knowledge the way I had wanted. That is why I took the precaution of saving all my work, including my homework, reading materials, and video lectures. I later learned that there was too little space left on my hard drive. Because the next semester, the one I am in right now, was coming up, I resorted to a quick fix to organize my files and save space: I compressed the files, placed the archives inside other archives, and those inside yet another. I made some room, but it did not make much of a difference. Now, whenever I try to access those files, I am simply locked out. The only way to retrieve them is to empty my hard drive of material I will not need, and since I am too busy with my schoolwork, I do not have the time at the moment to comb through my entire computer and carefully select what to discard. As a consequence, I am currently unable to look back at my work whenever I need to reinvestigate an issue of which I am reminded. If there is a valuable lesson to learn from this experience, it is to have a clear understanding of how a technology works before attempting anything. When it comes to compression, it is always important to preserve the original versions of the sources so that, in case anything goes wrong, a backup will be available for an emergency. Otherwise, you end up dealing with data that serves no purpose.
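In hindsight, the precaution I describe could have been automated. Below is a minimal Python sketch (my own, with placeholder file names) that compresses files into a zip archive and verifies the archive byte for byte before anyone even considers deleting the originals:

```python
import zipfile
from pathlib import Path

def archive_with_verification(files, archive_path):
    """Compress files into a zip archive, then verify it byte for byte.
    Originals should be kept until the archive is confirmed readable."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in files:
            zf.write(f, arcname=Path(f).name)

    with zipfile.ZipFile(archive_path) as zf:
        assert zf.testzip() is None  # CRC check on every member
        for f in files:
            assert zf.read(Path(f).name) == Path(f).read_bytes()
    print(f"{archive_path} verified; keep the originals as a backup anyway.")

# Hypothetical usage (these file names are placeholders):
# archive_with_verification(["homework.docx", "lecture01.mp4"], "semester1.zip")
```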

Tuesday, September 21, 2010

Week 5: Information Organization by Database, Metadata

Whenever information needs to be gathered, its earliest form often comes as raw data. Although there may be many interesting facts in it, what is presented appears as nothing more than a jumble serving no purpose. The situation begins to change with the introduction of the database, whose objective is to order the data in a manner that reflects the information it is meant to provide. The facts become far more comprehensible through organization, but the database is not the only tool capable of accomplishing such a task. Metadata also has the ability to organize data in its own unique way: the data can be managed by the features with which it is associated, i.e., its “content, context, and structure” serve as the “data about data” that help bring things into better order. One example of metadata being put to good use is the Dublin Core Data Model. What the model attempts to achieve is a means of categorizing and organizing materials that can be universally embraced across various professions. Although it is still under development, even if the finished product turns out to be far from perfect, it can always serve as a good start in the right direction.
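To show roughly what “data about data” looks like in practice, here is a small Python sketch of my own (the values are made up). It builds a record using a few of the fifteen Dublin Core elements and performs a trivial search over a list of such records:

```python
# A metadata record using a few of the fifteen Dublin Core elements.
record = {
    "title": "Week 5: Information Organization by Database, Metadata",
    "creator": "LIS 2600 student",   # placeholder value
    "date": "2010-09-21",
    "format": "text/html",
    "subject": ["metadata", "databases", "Dublin Core"],
}

# A database is, at minimum, an organized collection of such records.
catalog = [record]

def find_by_subject(catalog, term):
    """Return every record tagged with the given subject term."""
    return [r for r in catalog if term in r.get("subject", [])]

print(find_by_subject(catalog, "metadata"))
```

The point of Dublin Core is that the element names (title, creator, date, and so on) are agreed upon in advance, so records created in different professions remain comparable.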

The issues concerning metadata, as well as databases, remind me of a fable attributed to Aesop entitled “The Man, the Boy, and the Donkey” (be sure to go to the following link for a better understanding of what I am explaining: http://mythfolklore.net/aesopica/jacobs/62.htm). When it comes to organizing data, the database needs to be structured in a manner that anyone can comprehend. If it reaches the point where too many people have difficulty figuring out the design, then the database needs to be reconfigured or replaced. The introduction of metadata seemed like the simplified solution people had always wanted. However, when the technology was incorporated into the Internet, leading to the creation of the meta-tag, people were quick to express discomfort; because sites can potentially be labeled, it should be self-explanatory why such a reaction was given. As for what the Dublin Core Data Model is trying to accomplish, as much as I want the staff developing it to succeed, I still believe the model will be met with a great deal of disappointment. As the fable I posted tried to explain: “Please all, and you will please none.”

Friday, September 17, 2010

Week 3: Comments

Comment #1: http://jsslis2600.blogspot.com/2010/09/week-3-reading-notes.html#comments

Comment #2: http://pittlis2600.blogspot.com/2010/09/week-three-reading-notes.html#comments

Week 2: Comments

Comment #1: http://jsslis2600.blogspot.com/2010/08/week-2-discussion-topics-notes.html#comments

Comment #2: http://pittlis2600.blogspot.com/2010/09/week-two-reading-notes.html#comments

Week 1: Comments

Comment #1: http://jsslis2600.blogspot.com/2010/08/introduction-and-week-1-readings.html#comments

Comment #2: http://pittlis2600.blogspot.com/2010/08/week-one-reading-notes.html#comments

Assignment 2

My Flickr account image collection: http://www.flickr.com/photos/54018848@N07/

Muddiest Point

If I had to choose what I felt was too vague about this week’s topic, I would have to say it was the readings themselves. I am well aware that they were about computer software, but what in particular was I supposed to know about the topic? I did not feel this way about the Paul Thurrott article on Windows, since it took a more straightforward approach. As for the Machtelt Garrels reading on Linux and the material available on Mac OS X, I felt as though the information was all over the place. I spent far too much time trying to figure out how the readings were supposed to be connected and how they could all be summarized. To me, information on computer software can be reworded, but it cannot really be summarized; the facts are simply taken as they are presented, and it is merely a matter of wording them in a way the other person can comprehend. It was not until I discovered that the three operating systems share a common history that I finally had something to analyze and something worth writing about. If I want a general idea about a certain piece of software, of course I will look into any source available to me. But if there is a particular issue I need to know about, I would prefer a more focused article.

Google Desktop

After you have had some fun with it, you can think about a question: what does this tell us about the future of the library and the librarian?

I will need to refer to David Weinberger’s theory of the “Three Orders of Order” to answer this question. A library must fulfill its duty by gathering sources of information and then attempting to organize them. It is logical to assume that the more a collection grows, the more difficult it becomes to organize, which also means greater possibilities of desired items being lost in the shuffle and therefore irretrievable. What contributes to the difficulty is never really the increase in quantity by itself, but rather that so many books can belong to so many different fields of study. Whatever model is used for organizing the books, it needs to be carried out all the way. The introduction of the Dewey Decimal System managed to solve the problem, but the man who invented it (and he was a very eccentric man indeed) was compromised by his Christian bias and Western Eurocentric views. Just about any book concentrating on fields outside those spheres will have some difficulty finding its place in a collection that uses this model. Despite the imperfections, including some discriminatory features, libraries still use the system on the grounds that it at least gets the job done. Regardless of how the physical books have been arranged, computer systems offer far more flexibility in terms of organization. The availability of books within the library can be checked just by typing a word into the search engine, and when the search engine’s algorithm fails to display the results a patron is seeking, the search can be narrowed further through filters, allowing users to seek an item by author, title, date, publication, genre, and so on. Through folksonomy, people can exercise their own methods of identifying items by tagging them, as the sketch below illustrates. Although these features give patrons a better chance of finding what they need, whatever is invented and introduced will always remain far from perfect.
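The searching, filtering, and tagging described above can be sketched in a few lines of Python. This is only my own illustration of the general idea (the titles, authors, and tags are invented), not how any particular library catalog is actually implemented:

```python
catalog = [
    {"title": "Mastering French Cooking", "author": "J. Child",
     "year": 1961, "tags": ["cooking", "french-culture"]},
    {"title": "French History for Beginners", "author": "A. Historian",
     "year": 1998, "tags": ["history", "french-culture"]},
]

def search(catalog, text="", tag=None, **filters):
    """Free-text match on the title, narrowed by exact field filters;
    user-applied tags (folksonomy) are matched by membership."""
    results = [r for r in catalog if text.lower() in r["title"].lower()]
    if tag is not None:
        results = [r for r in results if tag in r["tags"]]
    for field, wanted in filters.items():
        results = [r for r in results if r.get(field) == wanted]
    return results

print(search(catalog, text="french"))                     # broad search: both books
print(search(catalog, text="french", author="J. Child"))  # narrowed by a filter
print(search(catalog, tag="french-culture"))              # found via patron tagging
```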

What patrons need to realize is that librarians cannot do everything for them. Just because librarians spend a lot of time surrounded by books does not mean they have actually read them all. A patron cannot simply provide a description of a book with a few details and expect the librarian to know automatically what is meant and immediately retrieve it. Of course a librarian needs to know how the library’s organizational system works, be it the Dewey Decimal System or the Library of Congress System, and as libraries incorporate newer technologies into their services, it is just as vital for staff members to know how to use the equipment. But whatever resources librarians are able to utilize, the most they can really do is narrow down the search; without any guarantee the item will actually be found, the patrons need to take it from there. Patrons also need to realize that since librarians, who are not perfect, have organized the materials based on models that are not perfect either, a particular item may not always be found where they anticipate. Take, for example, a book on French cuisine. The librarians would have had to decide whether it belongs in a section devoted to French culture or one devoted to cooking, seeing as it covers both fields. The patrons can debate all they want about which section it should have been in, but in the end, this is the decision the librarian made for the sake of getting the job done. At least the computer system is more capable of reaching a compromise. However, even if the computer system gives an exact location for the item, that does not necessarily mean it will be found. Because most patrons never take the time to familiarize themselves with any organizational system, they have a tendency to leave books in some of the most random places all over the library. Regardless of what solutions are provided in the future, whether intended for physical or digital formats, something will always get lost in the shuffle in one form or another. And regardless of how efficient the system becomes, patrons will always have issues to bring to the librarians. Then again, it is because of those complaints that technologies evolve.

Thursday, September 16, 2010

Week 3: Computer Software

As computer hardware evolved, so did the means of operating the equipment, which in turn inspired the creation of computer software. What drove that creation was people’s need for a system that was smaller and more sophisticated in appearance, yet simpler and more reusable in operation. This was the goal that UNIX achieved, and its model was later carried forward by Linux. As effective as it may have been, it was certainly not perfect, as Steve Jobs was able to demonstrate. During the 1980s, Apple introduced a newer kind of computer system, which borrowed certain elements from UNIX while incorporating ideas of Apple’s own. That renovation became a model in its own right, which later led to the creation of Mac OS X. What Bill Gates produced, which eventually led to Windows XP, Windows Vista, and Windows 7, was certainly an accomplishment, though not an especially unique one either. In the end, the software products we have are nothing more than what began as variations of the same system, each prospering in its own direction. Regardless of how much one differentiates itself from the others, the core elements seem to remain.

The parallel between the evolution of computer hardware and computer software lies in people’s demand for tools that are more presentable and easier to use. At first, such technologies were available solely to the military; later, big companies were able to obtain them for their own use. Seeing as they have the money, it seems logical that the best materials often go to the highest bidders. Much like the U.S. Government, which constantly felt the need to update its equipment for the sake of staying a step ahead of its Soviet counterpart, businesses are driven by the same urge when it comes to outdoing the competition. Each competitor tries to gain an advantage by looking for flaws in existing products and then trying to improve on them with a better version of what came before. However, if anyone is good at finding shortcomings, it is the consumers themselves. Because people by nature can never be satisfied, they will always look for an excuse to complain, and they will always find a reason to feel disappointed with their products. The only way these businesses can stay alive is by continuing to accommodate that non-stop dissatisfaction. The competition and the complaining are what make the technology and the industry prosper and evolve.

Saturday, September 4, 2010

Muddiest Point

I thought that the Wikipedia article on “[Personal] Computer Hardware” was the weakest element of this week’s topic. It has nothing to do with the fact that the source is Wikipedia, or that the reader is warned in advance that the article is in need of a clean-up. I am well aware that Wikipedia is not perfect, but I always depend on it whenever I want a general idea about a topic. So long as Wikipedia can achieve that goal like any other encyclopedia, I will simply take what I am given; if I want more detail on a topic, I will turn elsewhere. I suppose what compelled me to believe something was lacking in this particular article was that I had Moore’s Law in mind. Considering that technological innovations occur at such a rapid pace, I simply was not sure whether the information I received was basic enough. If I were to look up an article on “Computer Hardware,” I would expect to see the elements that have remained constant through the years. Then again, maybe it is because of these breakthroughs that there was so little to present, to the point of being uninformative. Even if what I was given is as basic as it gets by today’s standards, there is always some likelihood it could be considered misleading or inaccurate any day now. However, it makes little difference how anyone may have perceived the source: because the information is not etched in stone, it can always be edited to keep pace with the times.

Digitization

Digitization: Is It Worth It?

I often prefer to believe it is. After all, through the digitization of materials, the technology has enabled a more efficient way for people to distribute sources of information. So long as individuals have their own personal computers and access to the Internet, they should be able to reach those sources easily. My opinion may seem very optimistic, but I am aware of the consequences that dependence on this technology can have. There is always a possibility that the hard drive where the digitized versions of those sources are kept will crash, not to mention that even the slightest act of negligence can compromise the well-being of those files. Once this happens, a great deal of information can be lost. Although digitized copies can easily be retrieved by anyone, that scenario demonstrates that their lifespan can be much shorter. With physical copies, it is the other way around; thus neither form is superior to the other. This is why people need to be aware that digitized copies are not intended to replace physical copies, but to complement them.

Digitization is expensive; how do we sustain it? Is working with private companies a good solution? Are there any problems we need to be aware of with this approach?

When financing is the issue, the best way to sustain digitization is either to seek more funding or to reevaluate the spending. There is more to handling the technology than just buying the equipment. Unless the current staff members know or can learn how to use it, and are willing to take up the tasks without extra pay, the organization will need to hire more workers to maintain the equipment. If the treasury cannot afford such expenses, then the organization will have to wait and save until it can; contributions such as generous donations could speed up the process. I may be speaking from personal experience on this issue, but I would never recommend the option of working with private companies, with the exception of small businesses. When I think of private companies, I tend to think of the greedy corporations responsible for our current economic situation. Because of the reckless behavior that has persisted since the years of the Bush administration and its laissez-faire policies, they are bound to take control of, and mishandle, everything the moment the opportunity is available to them. I tend to believe that smaller businesses stay truer to their word and are far more deserving of trust.

“Risk of a crushing domination by America in the definition of the idea that future generations will have of its world”: is this a valid concern?

I believe it is a valid and very serious concern. What often allows an empire to prosper is the advocacy of tolerance. As people from different backgrounds are allowed to coexist peacefully, they are also able to bring more ideas to the table with less fear of persecution. Once the empire starts utilizing these ideas, it is better able to prosper. However, the people can always grow too comfortable with the progress. In order to further satisfy their materialistic needs, they turn elsewhere to consume more resources, which can never be done without making more enemies. As the empire, with its poorly disciplined and gluttonous residents, looks for more places to consume without end while instigating more conflicts, it is only a matter of time before everyone on the outside unites against the common enemy, leading to the empire’s destruction from both without and within. The United States is in a similar situation. Through its civil liberties, our nation achieved the prosperity we have today, with technologies that are among the greatest in the world. As inspiring as that may sound, it is just as disturbing to realize that this is the same country with one of the worst academic systems in the world. Considering that poorly educated people are able to wield the most advanced technologies in the world and dominate the Internet, what we have here is a global disaster waiting to happen.

Any other issues that pop up?

We have every right to be fascinated with what these technologies can accomplish, but people fail to realize there is a responsibility to uphold on their part. Such equipment can make a workload easier in many ways compared to the current model an organization follows in performing its duties. But the equipment can also make the workload more difficult in other ways, such as maintenance, and it can become an even greater burden if the organization has no clue how to use it. Sometimes a business or an institution can go bankrupt from trying to stay up to date with these technologies. This is why it is important never to dispose of the old models by which organizations conduct their work. When organizations incorporate these breakthroughs immediately, it is often done without thinking the matter through, or at least not thoroughly enough. Without a clear plan for how to use the equipment, it is simply put into place and expected to get everything done with a snap of the fingers. Ironically, the exact opposite happens, making an even greater mess than before. By preserving the old model, imperfect as it may be, an organization always has an emergency fallback to reestablish order when the new ideas turn out to be a disappointment.

Week 2: Computer Hardware

As demonstrated in the Wikipedia article on “[Personal] Computer Hardware,” it is essential for people to know what a typical computer system basically consists of. To comprehend how the equipment works, there should obviously be a description provided, including a list of its components, the functions they contribute, and how they complement each other to form the system. This source of information is useful to have, but it is just as important not to depend on it too heavily, as both the Wikipedia article and the Scientific American video on Moore’s Law seem to indicate. Because the law observes that the number of transistors on a circuit doubles roughly every two years, there is an increasing likelihood that any current model being studied is already outdated. This also means people will need to act more quickly in comprehending the latest breakthroughs. However, there is never any reason to dispose of such information, as the Computer History Museum demonstrates. To have a better understanding of how the computer system works, people also need to learn about its history. By preserving the models used in each of their contemporary times, people are able to learn about what inspired the successive innovations that followed. It is by understanding the past that we are able to confront the future.
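Moore’s observation is easy to turn into arithmetic. The short Python sketch below is my own illustration, taking the Intel 4004 of 1971 and its roughly 2,300 transistors as the conventional starting point and projecting a doubling every two years:

```python
def moores_law(start_count, start_year, target_year, period=2):
    """Project a transistor count, assuming a doubling every `period` years."""
    doublings = (target_year - start_year) / period
    return start_count * 2 ** doublings

# Intel 4004 (1971): roughly 2,300 transistors.
for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"~{moores_law(2300, 1971, year):,.0f} transistors")
```

Run forward forty years, the projection lands in the billions, roughly in line with the largest chips of today, which is exactly why any description of current hardware risks being outdated almost as soon as it is written.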

It appears to me that the evolution of computer systems clearly reaffirms Thomas Kuhn’s theory of scientific revolutions. As people try to gain a better understanding of the world, they go out and investigate. Based on whatever data they have gathered, they try to organize the details, and to organize them, a model needs to be created. Somewhere down the line, an anomaly appears that seems to contradict the model. One thing out of the ordinary after another, the model must be restructured or even replaced to accommodate those anomalies, only for the cycle to repeat. In parallel with Kuhn, what led to the creation of the very first computer was probably the need for a more efficient model for organizing information (i.e., the ability to carry so much information in so little physical space). The anomalies were the discovered flaws that hindered progress; once they were confronted and handled, newer versions were presented each time, eventually leading to the models we have today. As of now, the latest anomaly involves the transistors and circuits: according to the video on Moore’s Law, there is a limit to how many transistors a circuit can hold. This could lead to one of the following in the near future: the redesign of circuits to accommodate more transistors, the redesign of transistors to fit the circuits, or the redesign of both to accommodate each other. Either way, we are likely to witness a breakthrough completely different from what we have today within the next ten years.