made a tool for your books from storygraph
I updated my bookshelf page to grab my current reads from my StoryGraph account. It uses the unholy combo of an HTML script and an automated GitHub Action.
I found a (mostly broken) unofficial StoryGraph "api", but digging through the source, it's really an overly complicated HTML scraper that hasn't been updated for the site's current format, so it returns a bunch of nulls. So I redid it and made my own. Result below.
(Also, I highly recommend the Slough House series and the TV adaptation (Slow Horses); it's very faithful to the books. Aside's aside: Slough rhymes with cow.)
To get this working for yourself, check out the public repo.
To use it you'll have to do a few things.
- Fork the repo and make it public. (Don't just clone it, I don't want your book info.)
- On GitHub Actions, ensure that the main branch is selected. Also change the Action settings to allow the bot to read/write. This is under Actions settings -> General.
- Set two new secrets (under Settings -> Security -> Secrets and variables -> Actions). The first is STORYGRAPH_COOKIE, the second is STORYGRAPH_USERNAME. To get the cookie, log in to your profile, hit F12, then go to Storage. There will be a cookie called "remember_user_token"; copy that bad boy over as the cookie secret. Do NOT hardcode it into the python file, since someone could use it to log in as you and take over your account.
- Force a run of the action to verify it's all good and current.json gets updated with your stuff.
- Set up GitHub Pages (the .io domain thing): Settings -> Pages, and select the branch you want.
- Reference the .json file and display it on bearblog (or don't, I'm not your dad).
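Since the cookie lives in a repo secret rather than in the code, the scraper should pull it from the environment at runtime. A minimal sketch of that, assuming the Action exposes the secrets as environment variables with the same names (the function name here is hypothetical):

```python
import os


def get_session_cookie():
    """Build the cookie jar for the scraper from the repo secret.

    The STORYGRAPH_COOKIE secret is read from the environment, so the
    token never appears in the repository itself.
    """
    token = os.environ["STORYGRAPH_COOKIE"]
    # StoryGraph's "remember me" cookie, copied from the browser's
    # Storage tab as described in the setup steps.
    return {"remember_user_token": token}
```

If the secret isn't set, the `os.environ` lookup raises `KeyError`, which fails the Action loudly instead of silently scraping while logged out.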
To put it on bearblog you can do something like the below (what I did). This displays the books as covers with the title and author(s) below them. Not sure if the formatting will hold up with a lot of books, but it looks fine for me with 4 books on desktop and mobile.
Anyways, enjoy!
-Bruce
Source for displaying:
## Currently Reading
<ul id="reading-list">
<li>Loading…</li>
</ul>
<script>
fetch("https://yourgithubusername.github.io/reading-data/current.json")
.then(r => r.json())
.then(data => {
document.getElementById("reading-list").innerHTML =
data.books
.map(b => `
<li class="book-item">
<img src="${b.cover}" alt="${b.title} by ${b.authors}" />
<div class="book-info">
<strong>${b.title}</strong>
<div class="author">${b.authors}</div>
</div>
</li>
`)
.join("");
})
.catch(() => {
document.getElementById("reading-list").innerHTML =
"<li>Unable to load reading list.</li>";
});
</script>
<style>
#reading-list {
display: flex;
gap: 16px;
flex-wrap: wrap;
list-style: none;
padding: 0;
margin: 0;
justify-content: center;
}
.book-item {
width: 140px;
text-align: center;
}
.book-item img {
width: 100%;
height: auto;
border-radius: 0; /* removed rounded corners */
box-shadow: 0 4px 10px rgba(0, 0, 0, 0.15);
}
.book-info {
margin-top: 8px;
font-size: 14px;
}
.author {
margin-top: 4px;
font-size: 13px;
}
</style>
How it works
The book info inside the HTML is kind of sporadic, and not consistent between books in a series and standalones. But what is consistent is the alt text of the book covers. The script parses that into useful text, and also grabs the image's URL.
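The alt-text parsing can be sketched with a regex. The "Title by Author" shape below is an assumption about how StoryGraph formats its alt text; check the real repo for the exact pattern it handles:

```python
import re

# Assumed alt-text shape, e.g. "Slow Horses by Mick Herron".
# A title that itself contains " by " would split in the wrong place;
# the real scraper may need a stricter pattern.
ALT_PATTERN = re.compile(r"^(?P<title>.+?) by (?P<authors>.+)$")


def parse_cover_alt(alt):
    """Split a cover image's alt text into a title and author(s) dict."""
    match = ALT_PATTERN.match(alt.strip())
    if not match:
        return None  # alt text didn't match the expected shape
    return {"title": match.group("title"), "authors": match.group("authors")}
```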
A web crawler goes to the user's profile (accessible only with the cookie; profiles aren't public without a sign-in), grabs the HTML, parses it, dumps it into the JSON, adds a commit, and pushes it to main.
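The authenticated fetch amounts to sending the cookie as a request header. A sketch with the standard library, where the URL path and function name are assumptions, not the repo's actual code:

```python
import urllib.request


def build_profile_request(username, token):
    """Build a request for the user's currently-reading page,
    authenticated with the remember_user_token cookie.

    The URL path below is a guess at StoryGraph's layout; the real
    scraper may hit a different page.
    """
    url = f"https://app.thestorygraph.com/currently-reading/{username}"
    return urllib.request.Request(
        url,
        headers={"Cookie": f"remember_user_token={token}"},
    )
```

Passing the built request to `urllib.request.urlopen` would return the signed-in HTML for parsing.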
The crawler runs once a day at 0600 UTC, checks for differences in the JSON, writes if there were any changes, then exits.
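The diff check keeps the Action from committing when nothing changed. A minimal sketch of that step, assuming the JSON shape `{"books": [...]}` that the display script above consumes (function name hypothetical):

```python
import json
from pathlib import Path


def write_if_changed(books, path="current.json"):
    """Write the scraped book list only when it differs from what is
    already on disk, so an unchanged day produces no commit.

    Returns True if the file was (re)written, False if it was left alone.
    """
    out = Path(path)
    new_text = json.dumps({"books": books}, indent=2)
    if out.exists() and out.read_text() == new_text:
        return False  # no change, nothing to commit
    out.write_text(new_text)
    return True
```

The Action can use the return value (or `git status --porcelain`) to decide whether to commit and push.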