Blog

  • Optional Project 28

    Hi there! In this post I will be asking five different AI platforms the same question and comparing the results.

    The five AI programs that I talked to are all built on LLMs, so it will be interesting to see what they tell me. The prompt I gave each one was to “generate a list of primary sources that could be used to write a research paper on the American Revolution.” The AI platforms I used were CoPilot, ChatGPT, Claude, Meta AI, and Gemini.
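
    I asked each platform through its regular chat interface, but the same comparison could be repeated programmatically. Below is a minimal sketch using OpenAI's Python SDK as one example (the model name is a placeholder, and the other platforms have their own, differently shaped APIs), just to show how the identical prompt can be sent through an API so the runs are easy to rerun and compare.

    ```python
    # A minimal sketch (not what the web chat does internally) of sending the same
    # prompt through an API. The model name below is a placeholder.
    from openai import OpenAI

    PROMPT = ("Generate a list of primary sources that could be used to write "
              "a research paper on the American Revolution.")

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(response.choices[0].message.content)
    ```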

    A:

    First I used CoPilot, which is Microsoft’s AI tool, and it came up with 11 primary sources with brief descriptions of each.

    CoPilot suggested a variety of sources, but they were not organized in any particular way.

    B:

    The second program I asked was ChatGPT which is powered by OpenAI.

    ChatGPT gave me many more sources and organized them by document type. There was some overlap with the CoPilot results, but because there were so many more of them, the lists are not identical.

    C:

    The third AI program I used was Claude which is powered by Anthropic.

    Claude’s results were similar to the first two but organized differently, and it sometimes suggested categories of primary sources without specific documents to back them up. For example, it suggests looking at “eyewitness accounts of major battles from soldiers’ journals and letters” but does not provide specific journals or letters to look at.

    D:

    The next AI platform I used was Meta AI, which is Meta’s AI tool accessible through Instagram, Threads, Facebook, and other Meta social media platforms.

    These results were organized similarly to ChatGPT’s (B): they were detailed and listed specific sources rather than the general suggestions that Claude (C) gave.

    E:

    Lastly, I asked Gemini, Google’s AI platform, the same question.

    These results were very well organized, with a mix of general ideas for sources and specific documents. It also provided a different section at the end that lists resources to aid in finding primary sources, such as the National Archives and the Library of Congress.

    This exercise has shown that although all five of these AI applications draw on a vast amount of material from the internet, their results still differ. All of the AI platforms were able to locate some of the same documents, such as the Declaration of Independence, or suggest newspapers like the Pennsylvania Gazette. However, all of these AI tools seem to have a problem locating letters and journals of everyday people. Nearly all of the AI responses suggested using journals and letters, but few of them were able to name specific ones. This skews the results, as only well-known documents are listed, making research more focused on powerful figures or already well-known events.

    All of the AI tools were able to recognize primary sources well, with results listing personal accounts, legal documents, speeches, and other similar written documents. The biggest outlier was Gemini (E), which gave me definitions of primary sources rather than specific references to them. For example, it gave me the definition of what broadsides are but did not offer a specific one that I would be able to use.

    I think that CoPilot (A) and ChatGPT (B) came up with the most useful results, with specific primary sources relevant to my prompt. The least useful AI program for me was Gemini (E), which didn’t suggest any useful information besides places to start the search for primary sources. AI can be a helpful tool, but it also has to be used carefully, as its results are not always accurate and are often misleading.

  • Optional Project 25

    Hello! In this post I will be explaining how to use Zotero. This is a free tool that helps organize and store references. To get started I downloaded Zotero onto my device and then added the Chrome extension to my browser for easy access. I then accessed WMU’s library database by Googling “wmu library” and entered the phrase “Women in the American Revolution” into the database’s search bar.

    I also filtered my results so only articles would appear and altered the search results so 25 results would appear instead of just 10.

    Next, I clicked the Zotero extension that I had just downloaded and the message below appeared:

    I then clicked “select all” so all 25 of my search results would appear in Zotero. The list below shows the 25 results entered into Zotero and saved to my library.

    If an article is not selected, then no information populates on the right-hand side of the screen, but when one is selected, all the information that Zotero pulled in appears.

    Many different types of information appear when an article is selected; it is best to double-check this information and add more if possible. For example, adding tags or an abstract manually could be useful for articles where none appear, such as the item below.

    Overall, Zotero is great at filling in bibliographic information; however, it can make mistakes, and the data may need to be cleaned up, especially if it is going to be used in a real bibliography. Zotero uses APIs and embedded page metadata to gather the various fields needed for a citation, which saves time and makes it easy to capture information while collecting sources.
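
    Saved items can also be pulled back out programmatically. Below is a minimal sketch, assuming a hypothetical user ID and API key, that lists items from a Zotero library through the public Zotero Web API (this is separate from how the browser connector captures records).

    ```python
    # A minimal sketch of retrieving saved items through the Zotero Web API.
    # The user ID and API key below are placeholders.
    import requests

    USER_ID = "1234567"        # hypothetical Zotero user ID
    API_KEY = "your-api-key"   # hypothetical key created in Zotero account settings

    resp = requests.get(
        f"https://api.zotero.org/users/{USER_ID}/items",
        headers={"Zotero-API-Key": API_KEY, "Zotero-API-Version": "3"},
        params={"format": "json", "limit": 25},
    )
    resp.raise_for_status()

    # Each item carries the same bibliographic fields shown in the Zotero pane.
    for item in resp.json():
        data = item["data"]
        print(data.get("title", "<no title>"), "-", data.get("date", "n.d."))
    ```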

  • Optional Project 22/23

    Hi, welcome back! In this project I will explore the data visualizations that Excel offers. The dataset I used was found at data.gov; it tracks chronic absenteeism across all fifty states in the United States. I will also be using the checklist for visualizations, which is listed below.

    Checklist for Visualizations: 

    • Assess your data: discrete or continuous? 
    • Appropriate scale: Too big? Too small? Need a break? 
    • How will you label the data? What order? What data is most essential? 
    • Use graphic variables carefully: shape, tone, texture, and color convey meanings 
    • Proximity of labels to values is optimal for reducing cognitive load; make it easy for the viewer 
    •  Never use changes in area to show a simple increase in value. 
    • Review the graph to see if it contains elements that are “incidental” artifacts of production rather than meaningful ones. 
    • While illustrations, images, or exaggerated forms may be considered “junk,” they can also help set a theme or tone when used effectively. 

    This is the dataset I will be reviewing:

    The data here is discrete, and the scale is appropriate for the data collected; however, for the row that measures the United States as a whole, a break is needed so the information from the individual states can be read more clearly. I will label the data by state and organize it alphabetically to more easily locate data for a specific state. The most essential data is the total number of chronically absent students in each state. The information about race, disability status, and English language learners is important, but it is also important to look at total state demographics when considering this data.

    The first data visualization that I made is below. However, once I created it I realized that the total for the entire United States was skewing the data, so I decided to make another visualization without the United States as one of the data points; that is what is pictured in the second visualization.
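
    The filtering step is simple to express outside Excel as well. Below is a minimal sketch in Python/pandas, assuming the dataset is exported as a CSV with hypothetical column names, of dropping the nationwide total row before charting so it no longer dwarfs the per-state values.

    ```python
    # A minimal sketch; the file name and column names are placeholders for the
    # data.gov chronic absenteeism dataset described above.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("chronic_absenteeism.csv")  # hypothetical export of the dataset

    # The row for the whole country dwarfs every state, so drop it before charting.
    states_only = df[df["State"] != "United States"].sort_values("State")

    states_only.plot(
        x="State",
        y="Chronically Absent Students",
        kind="bar",
        legend=False,
        figsize=(14, 5),
        title="Chronically absent students by state",
    )
    plt.tight_layout()
    plt.show()
    ```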

    This visualization enhances the colors on the map to more accurately visualize the data. While the data is hard to interpret by just looking at the map, you can easily see the specific number of chronically absent students in a state by hovering your cursor over it. Above you can see the numbers for Michigan.

    I also created another data visualization which took a deeper look at specific racial demographics of chronic absenteeism.

    In the first graph the United States total was again skewing the results and making the individual state data more difficult to read, so I remade the graph without the total United States data included.

    In this visualization we are able to see the differences between the demographics much more easily. Color really helps here, as each smaller demographic group of students can be distinguished. From both visualizations we can see that Texas and California have higher numbers of chronically absent students, but we also need to consider that these are the most populated states in the whole country.

    These two data visualizations differ in what data they use and how they present information. Their usefulness depends on what one wants to learn from the data.

    From these data visualizations we can gather that Excel has many useful data visualization tools but it is important that we look at these datasets with a critical eye and make alterations to make the data more useful.

  • Optional Project 24

    Hello! In this post I will be designing a project inspired by Lev Manovich’s projects on his page, which I accessed here:  http://manovich.net/index.php/exhibitions/display:list. The article from this site that inspired me the most is titled “From Museum Without Walls to GenAI Museum.” 

    In this article Manovich describes how generative AI can be used for “speculative history” which allows AI tools to take in data and create similar images inspired by the original work and context. The example of this used in the article is AI generated paintings using the artistic styles of well-known Renaissance artists. He also suggests that this kind of AI can be used to create works of art that would emulate artists if they were inspired by other cultures or simulate a collaboration between two artists. All forms of this generative AI use cultural analytics to analyze large-scale data sets. 

    My project would use cultural analytics to analyze artifacts or structures that are damaged or incomplete and reconstruct them. This could be used by many humanities disciplines, but especially within history. By drawing on written accounts of an artifact or structure and extrapolating from its surviving portions, generative AI could reconstruct it completely. This would allow users to better understand the completed look of the artifact or structure and fully appreciate its purpose and value. It could be used in academic settings to teach how these items have deteriorated over time when compared to the reconstructed image. An example of using this on a large scale could be a reconstruction of ancient ruins: one could reconstruct the structures in the ruins to create a complete picture of the area.

    While this tool could be useful for filling in holes in physical artifacts, it could also spread misinformation. AI is flawed just like any other technology and could assume aspects of history that are not correct, creating narratives with only speculative history as evidence. This tool could be useful, but it is likely that with misuse it could create more issues than solutions.

  • Optional Project 21

    Hi there! In this project I will be exploring “Linked Jazz” and comparing it to project 19, which explored philosophy articles. In “Linked Jazz” the material collected is meant to connect the history and figures of jazz using linked data and a network diagram.
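
    To make “linked data” concrete, below is a minimal sketch (not Linked Jazz’s actual data model) of what sits underneath a network like this: RDF triples that connect two jazz figures. The namespace URI and the use of the FOAF “knows” relationship here are illustrative placeholders.

    ```python
    # A minimal sketch of linked data as RDF triples, using rdflib.
    # The example.org namespace and the relationships are illustrative only.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import FOAF, RDF

    g = Graph()
    EX = Namespace("http://example.org/jazz/")  # hypothetical namespace

    basie = EX["count_basie"]
    green = EX["freddie_green"]

    g.add((basie, RDF.type, FOAF.Person))
    g.add((green, RDF.type, FOAF.Person))
    g.add((basie, FOAF.name, Literal("Count Basie")))
    g.add((green, FOAF.name, Literal("Freddie Green")))
    g.add((basie, FOAF.knows, green))  # the link a network viewer would draw as an edge

    # Serializing as Turtle shows the triples a visualization could consume.
    print(g.serialize(format="turtle"))
    ```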

    In this diagram there are more interactive aspects that users can explore. This makes the data that someone can extract from the graph more useful. Information that can be extracted from the graph includes pictures, videos, audio recordings, information on each person, interview transcriptions, and connections to other jazz figures.  

    A group of only 8-10 musicians often means that the group is more on the outskirts or part of a more contained circle of artists. An example I found of this is Freddie Green and the people he interacted with: the number of other members of the jazz community that Freddie Green is linked to is very small. This is uncommon within this graph but not impossible.

    This network diagram allows users to filter the graph by gender. If you click on the gender mode at the top of the page, the graph color-codes each node, so men are blue and women are red. This makes examining gender much easier, with gender clearly marked on each person’s circle.

    Race cannot be directly searched, but for each individual figure there is a short account of their life and career, a picture of them, and a link to their Wikipedia page. From all of these items a user could determine a musician’s race, though this is flawed: Wikipedia is not always reliable, and one may have to guess at someone’s race. However, this is better than the philosophy network, where there was no way to easily identify an author’s race or gender. Seniority within the field is not easily searchable either; however, each musician’s birth date is listed in their bio, so a user could use that to identify figures who are more experienced or older.

    The information this resource offers to a newcomer is useful, as it is a simple way to connect many jazz figures together. For an expert, some of this information may already be known, but they are able to use the dynamic mode function to help with their research and add to the network. This function can link musicians together and make the relationships between people more interconnected.

    A dynamic network like this one is much more useful and interactive than the philosophy network from project 19, but the basis of the two is the same. As mentioned in project 19, a structure like the philosophy network would be helpful to visually construct a historiography where various articles are in conversation with each other. Information that might be included would be authors, their publications, dates of publication, and what they are saying about a specific topic, which would link them together. A structure like the “Linked Jazz” network would be useful for creating a map of the people involved in a movement, whether jazz or another one. This would include all the same information as the “Linked Jazz” project: a background on the figures, pictures, videos, dates, and links to one another based on beliefs, actions, or references.

  • Optional Project 19

    Welcome back, in this post I will be investigating a co-citation network diagram and investigating what the graph can offer. To do this I used “A Co-Citation Network of Philosophy” and interacted with the dynamic network that is linked in the article.  

    From this graph I can pull out the most cited items in articles published in the philosophy journals that were entered into the program and pick out debates that have been popular in recent years. This web captures the ways philosophers are in conversation with one another; the network is structured to show how one publication becomes a jumping-off point for others to comment on. Smaller groups with fewer than 10 co-citers mean that there are fewer contributors to a particular debate, making that part of the network more contained.
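
    The underlying idea is simple: two works get linked whenever the same article cites them both. Below is a minimal sketch of building such a co-citation network, using made-up bibliography data rather than the article’s actual dataset or pipeline.

    ```python
    # A minimal co-citation sketch: works are linked when one article cites both.
    # The bibliographies below are invented for illustration.
    from itertools import combinations
    import networkx as nx

    bibliographies = {
        "Article A": ["Kripke 1980", "Lewis 1986", "Quine 1960"],
        "Article B": ["Kripke 1980", "Lewis 1986"],
        "Article C": ["Lewis 1986", "Quine 1960"],
    }

    G = nx.Graph()
    for cited in bibliographies.values():
        for work1, work2 in combinations(sorted(cited), 2):
            # Edge weight counts how often the pair is cited together.
            if G.has_edge(work1, work2):
                G[work1][work2]["weight"] += 1
            else:
                G.add_edge(work1, work2, weight=1)

    for u, v, data in G.edges(data=True):
        print(f"{u} -- {v} (co-cited {data['weight']} times)")
    ```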

    Within the groups the information given is the author’s name, the name of the article, the year of the article, and the number of citations. From this limited information we can only make assumptions about the demographics of the authors mentioned. Gender can be inferred from the names listed; however, assuming this can lead to misinformation. Race cannot be determined beyond guessing at ethnicity based on the origin of the names provided, but again, this is highly flawed, and it would be ineffective to assume these aspects of identity. Seniority in the field can be seen in the years of publication and the frequency of an author’s name in the network. A more experienced philosopher would have more published works, and their publication dates would stretch back earlier than most of the other articles.

    For a newcomer to this field the information provided can serve as a jumping off point to further explore various philosophical debates and use the network to locate related debates without needing a lot of background knowledge. As for an expert in the field, they can use this data to deepen their understanding of a topic by having access to a web of related information. Experts can see what has already been discussed and add to these debates.  

    This type of network diagram is very useful when discussing topics in conversation with one another. It is useful in the humanities but especially within history as this could create a useful visual resource for a historiography. Within specific topics, historians would be able to visualize how a topic in history has been taught or understood over a period of time by using co-citations. 

  • Optional Project 18

    Hello! In this post I will be exploring two Stanford spatial history projects and comparing their presentation and layout. The two projects I have chosen are “Reconstructing California Conservation History” and “Richard Pryor’s Peoria.” 

    Beginning with both projects on the Stanford University website, there is basic information, a gallery of resources, and information about the teams that worked on each project. The Richard Pryor project has only one item in the gallery, making it simple for those wanting to access the site to find it. The California Conservation project has five items in the gallery, which is helpful as there are more resources on the topic.

    Users are able to explore the five gallery items for the Reconstructing California Conservation History project. Two of these items are articles providing information on the project, complete with maps, images, and interactive applications. Within the articles, the interactive applications are still functional, but many of the images no longer appear. The three other items in the gallery are no longer functional. This is a limiting factor for the designers, as the work they put into their applications cannot be fully utilized due to out-of-date software.

    The designers of the California Conservation project chose to display their data both through written information, similar to a traditional academic journal article, and through interactive applications. While both can be effective, most of the interactive aspects of the project are unusable; the only parts that still provide information are the abstracts and some supplementary notes, which are not as effective as a working application with all data available. The articles that are provided offer detailed information and have proven longer-lasting than the videos and applications. The videos and applications that no longer work retain some information in their descriptions, but there is no way to fully access the material.

    The message that the California Conservation team wants to convey explains the conservation efforts throughout California and tracks the effects they had. This information can be valuable to those living in California who have a personal connection to the land and an interest in preserving the environment. It also benefits future conservationists who can access it. Those who access this information don’t need extensive background knowledge on the subject, but some basic knowledge would help in understanding it. People may also be interested in gathering more up-to-date data and exploring the conservation efforts of other parts of California or other states.

    Richard Pryor’s Peoria project is different in the material a user can access. This project focuses on the life and career of Richard Pryor as well as his connections to Peoria, Illinois. The only item in the gallery on the Stanford website is a link to a website that serves as a digital companion to the biography Becoming Richard Pryor by Scott Saul. This website is fully functional and has all of the data available on the website.  

    This website is very user-friendly with tabs at the top making navigating through many topics easy. Users can sort through material by people, places, eras, and themes. The designers of the site chose to make sure that all the information on the website would be long-lasting. This is likely because it works in tandem with the book, and this allows readers to access this information. The developers organized the data in a variety of ways, through large essays teaching the history of Pryor’s life, a timeline, and access to many primary sources such as newspaper articles and legal documents. This website would attract people interested in Richard Pryor or Peoria history and readers of Becoming Richard Pryor would be interested in accessing this website.  

    Limitations of this application are that there is an overwhelming amount of information, which can make finding details difficult. Information on the site is organized in long paragraphs, so users need to sort through large amounts of text. Another limitation is that finding the website can be difficult if a user does not start on the Stanford website or has not already read the associated book and so does not know what to search for. When entering a simple Google search for “Richard Pryor,” this website does not appear in the results, making it harder for users to find and access the site. Even so, this website could inspire more people to take an interest in Richard Pryor’s life and Peoria history.

    Overall, the Stanford spatial history project is a useful resource. However, because the site is no longer being updated, it is hard to use all aspects of the projects. If the site were updated, it would be much more valuable. As it exists now, the ways the data is presented are effective within the sources that still function, but outside of those the data is effectively unusable.

  • Optional Project 17

    Hi there! In this project I will be looking at datasets through data.gov and identifying what data cities are collecting and why. To get started I went to data.gov, searched for “towed cars,” and filtered the results to local government.

    I selected the first result, which tracks vehicle repairs and towing in Montgomery County, Maryland.

    I then went into this dataset and put the data into OpenRefine.

    Once I created a project from this data, I could see all of its details. There were 2,426 records in this set.

    From this data you can see that a lot of information is recorded for each item, including the company that made the repair or towed the vehicle, the town where this happened, contact information, and the date these incidents occurred.

    This information is being recorded by the county because it can track which companies are getting the most business from these occurrences, which can help the economies of the area. It tracks people whose cars have been towed many times and can reveal whether there is a specific reason why. It also tracks the areas in towns and cities where cars are commonly towed, which can help with patrolling those areas better or creating solutions so fewer people get towed.

    Some issues with this data include a lack of uniformity and ease of use. First, the data entered is not consistent; for example, in the “state” column the word Maryland is entered as “MD,” “Md,” and “Maryland.” All of these different ways of entering the state make it harder to filter the results, because a user would need to enter all three to get all of the information they needed.
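
    OpenRefine’s text facets and “cluster and edit” feature are built for exactly this kind of cleanup. The same fix can also be sketched outside OpenRefine; below is a minimal Python/pandas version, with the file name and column name as placeholders for the towing dataset described above.

    ```python
    # A minimal sketch of normalizing the different spellings of Maryland to one value.
    # The file name and "State" column name are placeholders.
    import pandas as pd

    df = pd.read_csv("montgomery_towing.csv")  # hypothetical export of the dataset

    # Map every observed variant to a single canonical value.
    state_fixes = {"md": "MD", "maryland": "MD"}
    df["State"] = (
        df["State"]
        .astype(str)
        .str.strip()
        .str.lower()
        .map(state_fixes)
        .fillna(df["State"])
    )

    print(df["State"].value_counts())  # every Maryland row now counts under "MD"
    ```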

    The ease of use of the data is also an issue because many sections of the data are missing, making it more difficult to get accurate analytics. The data is not very detailed either, with only basic information available; with more detailed data, government officials would be able to learn even more about vehicle repairs and towing in their county.

    Overall, this data is useful for the county and is a great resource. OpenRefine is a useful tool that helps organize the data well and makes it readable.

  • Optional Project 15

    Hello! Within this post I will analyze two different universities’ Omeka sites by looking at the metadata available, what information is presented, and the ease of use of the sites. The two Omeka sites I looked at were the Tufts University Library Omeka site and the Johns Hopkins University Sheridan Libraries and University Museums Omeka site. I found both by Googling “Proudly Powered by Omeka” with site:edu; both sites appeared on the first page of my search.

    The Tufts University site focuses on the history of Ann Radcliffe’s book “The Mysteries of Udolpho” and the Johns Hopkins site explores the life and career of Rosa Ponselle.

    Tufts University Landing Page:

    Johns Hopkins University Landing Page:

    Beginning on the landing pages of both sites, you can see how the layout differs. The Tufts site uses the default appearance, with simple blue lettering and a white background. This is the same look as the Omeka site that Dr. Hadden uses and that the class added to last week. This default appearance conveys the content without being distracting. The Johns Hopkins site was edited to use a customized font and different colors. The overall layout of the sites is also different: while both feature tabs at the top of the page, the tabs look different and do different things. Neither display is distracting or takes away from the information being provided.

    Intended Audiences and Reasons for the Projects:

    The messaging and intended audience for each of these projects was very different, with the Tufts site being more similar to the one that our class added to, since it was an assignment for a history course. The image above shows how, at the bottom of each page within this project, it is noted that the site was created by the Tisch Library and students in History 96: History of the Book. The intended audience for this project is the other people in the class and the professor of the course, while the Johns Hopkins University site was meant to promote an in-person exhibit.

    This Omeka project was promoted on the Johns Hopkins University official website and had links to other resources associated with the exhibit, such as dates and times of the exhibit opening and a concert paying tribute to Rosa Ponselle’s career. As this was all promotional material for the exhibit, the intended audience is the Johns Hopkins University community, who might be interested in Rosa Ponselle’s life and career and come to visit the exhibit.

    Navigating the sites:

    Both sites lead you to the next page with arrows at the bottom of each page, which helps you navigate easily and follow the narrative logically. The Johns Hopkins site only has information on Rosa Ponselle, so all the links and pages relate to her life, career, and the exhibit about her at the Peabody center. On the Tufts site, while the initial pages about Ann Radcliffe’s The Mysteries of Udolpho are easy to navigate, there are many other projects within the Omeka site, so once you leave the Ann Radcliffe project it is difficult to find your way back. There are also many dead links that result in an error message or just take you back to the landing page. Because of this, navigating the initial project is simple, but once you leave that area the site becomes more complicated. The other projects on the Omeka site are not all related and cover a variety of topics and classes. Many of them are in French, which can make finding your way back from those pages extra challenging. Below are just a few examples of the variety of projects available on the Tufts site.

    Dublin Core and Metadata:

    Both projects use Dublin Core as they are humanities projects powered by Omeka, but the information available for each project is slightly different.

    The Tufts University project has easy-to-find places for all of the Dublin Core information and metadata used for its resources. Again, this is similar to the site that our own class contributed to, in which we could all view each other’s individual Dublin Core items.

    The Johns Hopkins University site was not as accessible in its Dublin Core and metadata information. While the Dublin Core information is available, it is not always easy to reach, and at times only limited information appears. Metadata is also not always available on the Johns Hopkins site, but it is consistently available on the Tufts site. On the Tufts site there is a way to quickly scroll through all of the items and navigate their data easily, but this was not available on the Johns Hopkins site.

    For both Dublin Core items and metadata, the standard information was available on both sites. As previously mentioned, there are some items on the Johns Hopkins site where this information is missing, but there is still space for it within the data fields. This leads me to believe that both the Tufts and Johns Hopkins sites are using the same plug-ins as our class did on Dr. Hadden’s Omeka site.

  • Omeka Project 1

    Hello! In this post I will be explaining how I contributed two resources to Dr. Hadden’s Omeka site and used Dublin Core to categorize various data points.

    The two resources I will be contributing are The Picture of Dorian Gray and the Chicago Stock Exchange Trading Room Reconstruction.

    Step 1:

    First I select “Add a new item” and the page below appears. Now I can start adding the various Dublin Core data elements, beginning with The Picture of Dorian Gray.

    Step 2:

    Next I add in all of the Dublin Core elements that are asked for.
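
    For reference, below is a minimal sketch of the standard Dublin Core elements as they might be filled in for this item. The values shown are illustrative, not necessarily the exact ones entered on the site.

    ```python
    # Illustrative Dublin Core record for the item; values are examples only.
    dublin_core = {
        "Title": "The Picture of Dorian Gray",
        "Creator": "Oscar Wilde",
        "Date": "1890",
        "Type": "Text",
        "Format": "Book",
        "Language": "English",
        "Subject": "Gothic fiction",
        "Description": "Novel about a portrait that ages while its subject does not.",
        "Publisher": "Lippincott's Monthly Magazine",
        "Rights": "Public domain",
    }

    for element, value in dublin_core.items():
        print(f"{element}: {value}")
    ```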

    Step 3:

    Then I move on to the next page, which asks for the metadata of the resource.

    Step 4:

    Moving on to the next page, I add the image file for The Picture of Dorian Gray.

    Step 5:

    Then I add tags to make my resource more accessible.

    Step 6:

    I then add the item to the collection and am able to view it among all of the other items from our class.

    Next I will repeat all of these steps for my second resource.

    Repeat step 2:

    Here I added all of the Dublin Core elements just as I did in step 2 for The Picture of Dorian Gray, but this time all of the material corresponds to the Chicago Stock Exchange Trading Room Reconstruction.

    Repeat step 3:

    Then I went to the next page and added all of the metadata that corresponds to the Chicago Stock Exchange Trading Room Reconstruction.

    Repeat step 4:

    On the next page I add the image that I want to attach to this resource.

    Repeat step 5:

    On the last page I add tags to make my resource easier to find.

    Repeat step 6:

    Lastly, I am able to view my second resource in the list of items from our class.