So I managed to scrape and analyze my first chunk of HTML from a live website using this handy tutorial.
I installed and learned how to use the lxml and requests Python libraries, and learned how to copy and use the XPath of an HTML element. lxml, combined with requests, lets you fetch the full HTML from a given URL and then structure it into a data tree that you can traverse.
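As a sketch of that fetch-and-parse step (the function name and URL here are my own placeholders, not from the tutorial):

```python
import lxml.html
# import requests  # uncomment to fetch a live page

def build_tree(html_text):
    """Parse raw HTML into an lxml element tree we can traverse."""
    return lxml.html.fromstring(html_text)

# A live fetch would look roughly like this (not run here):
# resp = requests.get("https://example.com/")
# tree = build_tree(resp.text)

# Offline demo with a small HTML snippet:
sample = "<html><body><h1>Hello</h1><p>First paragraph.</p></body></html>"
tree = build_tree(sample)
print(tree.findtext(".//h1"))  # → Hello
```

Once you have the tree, the usual ElementTree-style methods (`findtext`, `iter`, and especially `xpath`) are all available on it.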
You can then inspect an element using the developer tools built into a web browser and copy its XPath. Referencing the XPath of a particular tag or class lets you pull every piece of data with that tag from the data tree you created and put it into a handy list object.
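For example, the class-based extraction might look like this (the `comic` class name and image paths are made up for illustration):

```python
import lxml.html

html = """
<html><body>
  <div class="comic"><img src="/strips/001.png" alt="Strip 1"></div>
  <div class="comic"><img src="/strips/002.png" alt="Strip 2"></div>
  <div class="ad"><img src="/ads/banner.png"></div>
</body></html>
"""

tree = lxml.html.fromstring(html)
# Pull the src of every image inside a div with class="comic";
# xpath() returns a plain Python list of the matching strings.
srcs = tree.xpath('//div[@class="comic"]/img/@src')
print(srcs)  # → ['/strips/001.png', '/strips/002.png']
```

Note how the ad image is skipped because its div doesn't match the class predicate, which is exactly what makes this approach handy for picking one kind of content out of a cluttered page.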
When I tried this with my first webcomic, however, I got an error when requests tried to decode the HTML. Apparently handling encoding quirks from site to site is just part of scraping professionally, so that's my next research topic. Once I get that sorted I should be able to write code to scrape what I want scraped, and then I can learn how to set up a cron job that runs the scrape automatically on a schedule and my project will be finished!
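From what I've read so far, there are two common fixes for that kind of decode error, sketched below; the URL is a placeholder, and I haven't confirmed yet that either fix solves my particular site:

```python
import lxml.html
# import requests

# Option 1: hand lxml the raw bytes instead of requests' decoded text.
# lxml's HTML parser honors the page's own <meta charset> declaration,
# which requests can get wrong when the server omits a charset header.
# resp = requests.get("https://example.com/comic")
# tree = lxml.html.fromstring(resp.content)  # bytes, not resp.text

# Option 2: ask requests to re-guess the encoding from the body bytes:
# resp.encoding = resp.apparent_encoding
# tree = lxml.html.fromstring(resp.text)

# Offline demo of option 1: bytes with a declared charset parse cleanly.
raw = ('<html><head><meta charset="utf-8"></head>'
       '<body><p>café</p></body></html>').encode("utf-8")
tree = lxml.html.fromstring(raw)
print(tree.findtext(".//p"))  # → café
```

For the scheduling step, my understanding is that a crontab entry along the lines of `0 6 * * * python /path/to/scrape.py` would run the script every morning, but I still need to learn cron properly before trusting that.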