I’m embarking on a year-long crash course in the humanities. These are my notes for week 2. Following Ted Gioia’s curriculum, I tackled a small volume of early Greek poetry and the first ten books of the Odyssey. I also heard music from recent descendants of these ancient bards and saw a classic (and disturbing!) film.

Readings

I was apprehensive going in. I’ve read poems in four languages (English, Spanish, French, and Italian) and always struggled. Not this time. The volume I used was Greek Lyrics, translated by Richmond Lattimore. It consists mostly of short poems and fragments. Standout poets: Theognis, Alcaeus, Hybrias, Anacreon, and – especially – Sappho, whose love poems knocked my socks off.

The most engaging poems are the ones that deal with tangible subjects, such as love, produce, warfare, and the ocean, rather than godly exploits or abstractions. I was surprised at how mundane and relatable many are. For example, accruing wealth and power is a...
6 months ago


More from Jorge Arango

Traction Heroes Ep. 9: Procrastination

Do you ever catch yourself avoiding things you need to do? Sure you do: we all do it. In episode 9 of Traction Heroes, Harry and I discuss what to do about it.

The conversation took off when Harry read a fragment from Oliver Burkeman’s book, Meditations for Mortals. I won’t cite the entire passage here, but this gives you a taste:

It can be alarming to realize just how much of life gets shaped by what we’re actively trying to avoid. We talk about not getting around to things as if it were merely a failure of organization or will. But often the truth is that we invest plenty of energy in making sure that we never get around to them. …

The more you organize your life around not addressing things that make you anxious, the more likely they are to develop into serious problems. And even if they don’t, the longer you fail to confront them, the more unhappy time you spend being scared of what might be lurking in the places you don’t want to go.

The irony, of course, is that we put off uncomfortable tasks because they make us anxious. But putting them off ultimately makes us more anxious. As Harry reminded us, “bad news doesn’t get better over time.” He also proposed a helpful framing: facts are friendly. That is, even though knowing the truth might make us uncomfortable, knowing is better than not knowing.

We discussed practical steps to gain traction:

- Ask yourself: what am I pretending not to know? Deep down, you know there’s more to the situation than you’ve let on; acknowledge the elephant in the room to move forward.
- Plan around the last responsible moment. Some events have fixed time windows; understand by when you must decide.
- Rewrite the narrative using the non-violent communication lens: separate your observations from interpretations, feelings, and needs.

As always, I got lots of value from this conversation with Harry. But this one isn’t about thinking; it’s about doing. And doing is hard when the mind doesn’t want to face facts.

Traction Heroes episode 9: Procrastination

2 months ago 14 votes
Humanities Crash Course Week 18: 1,001 Nights

In week 18 of the humanities crash course, I read five stories from One Thousand and One Nights, a collection of Middle Eastern folktales that has influenced lots of other stories. Keeping with the theme, I also saw one of the most influential movies based on these stories.

Readings

An influential collection of Middle Eastern folk tales compiled during the Islamic Golden Age. The framing device is brutally misogynistic: a sultan learns that his wife is unfaithful, so he executes her. He decides all women are the same, so he marries a new bride every day and has her executed the following day. Sheherazade asks her father, the vizier, to offer her in marriage to the sultan. The vizier is reluctant: they both know the wives’ fate. But Sheherazade has a clever plan: she starts a new story for the sultan every night but leaves it on a cliffhanger. Curious about the outcome, the sultan stays her execution until the next day. In this way, Sheherazade spares the lives of the other maidens of the land.

Of the many stories in the book, I read five recommended by Gioia:

- The Fisherman and the Jinni: a poor fisherman unwittingly unleashes a murderous jinni from a bottle, but tricks him back into the bottle by outwitting him.
- The Three Apples: an ancient murder mystery (again, centered on the murder of an innocent woman); the “solution” involves more unjust death (at least by our standards).
- Sinbad the Sailor: a series of seven fantastical voyages involving monsters, magic, and stolen treasures; one of the voyages closely parallels the Cyclops episode from the Odyssey.
- Ali Baba and the Forty Thieves: another story of murder and ill-gotten treasure; a poor man discovers where a band of thieves stashes their loot and steals from them.
- Aladdin: a poor boy discovers a magic lamp that makes him wealthy and powerful, allowing him to marry a princess.

These have been re-told in numerous guises. As often happens in these cases, the originals are much darker and bloodier than their spawn. These aren’t Disney versions, for sure.

Audiovisual

Music: Highlights from Tchaikovsky’s famous ballets plus Rimsky-Korsakov’s Sheherazade. I’d heard the ballets, but not the Rimsky-Korsakov. This piece reminded me of Paul Smith’s music for Disney’s 20,000 LEAGUES UNDER THE SEA (1954).

Arts: Gioia recommended Aboriginal Australian art. I’d seen works in this style, but hadn’t paid attention. This tradition has a particular (and gorgeous) style that expresses strong connections to the land. I was surprised to learn about recent developments in this tradition.

Cinema: Alexander Korda’s THE THIEF OF BAGDAD (1940), one of the many films inspired by the One Thousand and One Nights. While it now looks dated, this film was a special effects breakthrough. As an early example of Technicolor, it also features an over-the-top palette, much like its near-contemporary, THE WIZARD OF OZ.

Reflections

One can’t do justice to One Thousand and One Nights by only reading five stories. But the ones I read dealt with poor people being unfairly granted wealth and power. Escapist fantasies tend to stand the test of time.

The “heroes” in the stories deserved as much comeuppance as the “villains.” For example, in Ali Baba and the Forty Thieves, one of the heroes commits a mass killing of the “bad guys” while they are unable to react. Not only does this go unpunished; it’s celebrated. The people who told these stories had moral standards different from our own.
I also learned that several stories — including some of the most famous, such as Ali Baba and the Forty Thieves and Aladdin — were not part of the original collection. Instead, they were added by a French translator in the 18th century. This was frustrating, as they weren’t present in the collection I bought; I had to seek them out separately.

So, this week, I’ve been pondering questions of authorship and derivation. We don’t know who originated these stories. Like the Aboriginal Australian art, the stories in the One Thousand and One Nights emerged from — and belong to — a people more than an individual author or artist. And yet, they’ve inspired other works, such as THE THIEF OF BAGDAD — which inspired Disney’s ALADDIN. (The latter “borrows” liberally from the former.)

Is it any wonder I heard Rimsky-Korsakov in the 20,000 LEAGUES score? At this point, I assume at least some cross-pollination — after all, Rimsky-Korsakov himself was inspired by the One Thousand and One Nights. This is how art has always evolved: artists build on what’s come before. In some cases, the inspiration is obvious. In others, it’s more nebulous. Did Odysseus inspire Sinbad? Or did they both retell older stories?

The process changed in the 20th century. With strong copyright laws, stories became intellectual property. Disney may build on the One Thousand and One Nights stories, but we can’t build on Disney’s stories. And it’s changing again with large language models. It will be interesting to see how these new tools allow us to retell old stories in new ways. At a minimum, they’re causing us to reevaluate our approach to IP.

Notes on Note-taking

A realization: my Obsidian knowledge repository is better suited to reflecting on text than on other media. I can try to write down my impressions of the beautiful Aboriginal art and Rimsky-Korsakov’s music. But words articulate concepts, not feelings — even when trying to articulate feelings. So I end up reflecting on abstract ideas such as authorship and derivation rather than the nature of the works themselves. It’s a limitation of my current note-taking system, and one I can’t do much about. Perhaps ChatGPT can help by letting me riff on pictures and sounds? But there, too, communication happens through language.

Up Next

Gioia recommends the Bhagavad Gita, the Rule of St. Benedict, and the first two books of Saint Augustine’s Confessions. This will be my first time with any of them.

Again, there’s a YouTube playlist for the videos I’m sharing here. I’m also sharing these posts via Substack if you’d like to subscribe and comment. See you next week!

2 months ago 12 votes
Local GraphRAG: A Progress Report

The dream is running GraphRAG with locally-hosted LLMs. And at least for now, the dream is on hold for me.

In case you missed it, GraphRAG is a way of getting more useful results from LLMs by working with data you provide (in addition to whatever they’ve been trained on). The system uses LLMs to build a knowledge graph from documents you provide and then uses that graph to power RAG queries. This opens lots of possibilities. For information architecture work, it lets you ask useful questions of your own content. I’ve written about my experiments in that scenario. In that case, I used OpenAI’s models to power Microsoft’s GraphRAG application.

But I’m especially excited about the possibilities for personal knowledge management. Imagine an LLM tuned to and focused on your personal notes, journals, calendars, etc. That’s primarily why I’m dreaming of GraphRAG powered by local models.

There are several reasons why local models would be preferable. For one, there’s the cost: GraphRAG indexing runs are expensive. There’s also a privacy angle. Yes, I’ve told OpenAI I don’t want them to train their models using my data, but some of this stuff is extremely personal and I’m not comfortable with it leaving my computer at all. But an even larger concern is dependency. I’m building a lifelong thinking assistant. (An amanuensis, as I outlined in Duly Noted.) It’s risky to delegate such a central part of this system to a party that could turn off the spigot at any time.

So I’ve been experimenting with GraphRAG using local models. There’s good news and bad news. Before I tell you about them, let me explain my setup. I’m using a 16” 2023 M2 Max MacBook Pro with 32GB of RAM. It’s not an entry-level machine, but not a monster either. I’m using ollama to run local models. I’ve tried around half a dozen at this point and have successfully set up one automated (non-GraphRAG) workflow using mistral-small3.1.

GraphRAG is extremely flexible. There are dozens of parameters to configure, including different LLMs for each step in the process. Off the shelf, its prompts are optimized specifically for GPT-4-turbo; other models require tweaking. Indexing runs (where the model converts texts to knowledge graphs) can take a long time, so tweaks are time-consuming. I’ve had a go at it several times, but given up after a bit. I don’t have much free time these days, and most experiments have ended unsuccessfully with failed (and long!) indexing runs.

But a few things have changed in recent weeks:

- GraphRAG itself keeps evolving
- There are now more powerful small local models that run better within my machine’s limitations
- ChatGPT o3 came out

That last one may sound like a non-sequitur. Aren’t I trying to get away from cloud-hosted models for this use case? Well, yes — but in this case, I’m not using o3 to power GraphRAG. Instead, I’m using it to help me debug failed runs. While certainly nothing like AGI, as some have claimed, o3 has proven excellent for dealing with the sort of tech-related issues that would’ve sent me off to Stack Overflow in the past. Debugging GraphRAG runs is one such task. I’ve been feeding o3 logfiles after each run, and it’s recommended helpful tweaks. It’s the single most important factor in my recent progress.

Yes, there’s been some progress: yesterday, after many tries, I finally got two local models to successfully complete an indexing run. Mind you, that doesn’t mean I can yet successfully query GraphRAG. But finishing the indexing run without issues is progress. That’s the good news.
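(In case you haven’t played with ollama: here’s a minimal sketch of the kind of local-model call everything above builds on, using ollama’s Python client. The model name matches my setup; the function name and prompt are purely illustrative, not my actual workflow.)

```python
# Minimal sketch: one local-model call via ollama's Python client.
# Assumes `pip install ollama`, the ollama server running locally,
# and `ollama pull mistral-small3.1` already done.
import ollama

def summarize(text: str) -> str:
    """Ask the locally-hosted model for a one-paragraph summary.
    Nothing here leaves the machine."""
    response = ollama.chat(
        model="mistral-small3.1",  # model from my setup; swap in your own
        messages=[
            {"role": "system", "content": "Summarize the user's note in one paragraph."},
            {"role": "user", "content": text},
        ],
    )
    return response["message"]["content"]

if __name__ == "__main__":
    print(summarize("Test note: local indexing runs are slow but private."))
```

The same ollama server also exposes an OpenAI-compatible HTTP endpoint, which is how tools like GraphRAG can be pointed at local models instead of the cloud.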
Alas, the indexing run took around thirty-six hours to process nineteen relatively short Markdown files. To put that in perspective, the same indexing run using cloud-hosted models would likely have taken under ten minutes. My machine also ran at full throttle the whole time. (It’s the first time I’ve felt an M-series Mac get hot.)

The reduced processing speed isn’t just because the models themselves are slower: it’s also due to my machine’s limitations. After analyzing the log files, ChatGPT suggested reducing the number of concurrent API calls. The successful run specified just one call at a time for both models; see the sketch at the end of this post for what that tweak looks like.

The upshot is that even though the indexing run finished successfully, this process is impractical for real-world use. My PKM has thousands of Markdown files. ChatGPT keeps suggesting further tweaks, but progress is frustratingly slow when cycles are measured in days. I’ve considered upgrading to a MacBook Pro with more RAM, or increasing the number of concurrent processes to find my machine’s upper threshold. But based on these results, I suspect improvements will be marginal given the amount of data I’m looking to process. So that’s the bad news.

For now, I’ll keep working with local models for other uses (such as OCRing handwritten notes, the workflow I alluded to above; more on that soon!). And of course, I’ll continue experimenting with cloud-based models for other use cases. In any case, I’ll share what I learn here.
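For reference, the concurrency tweak mentioned above looks roughly like this. A minimal sketch, assuming a graphrag settings.yaml where "concurrent_requests" sits under the "llm" and "embeddings: llm" keys (key layouts have changed across graphrag versions, so verify against your own config before running):

```python
# Minimal sketch: force GraphRAG to make one LLM call at a time.
# Assumes PyYAML (`pip install pyyaml`) and that this graphrag version
# keeps `concurrent_requests` under `llm:` and `embeddings: llm:` in
# settings.yaml -- check the key paths in your own file first.
import yaml

SETTINGS_PATH = "settings.yaml"  # illustrative path, in the GraphRAG project root

with open(SETTINGS_PATH) as f:
    config = yaml.safe_load(f)

# One request at a time: slow, but it's what let my indexing run finish.
config["llm"]["concurrent_requests"] = 1
config["embeddings"]["llm"]["concurrent_requests"] = 1

# Note: safe_dump drops any comments in the file; edit by hand to keep them.
with open(SETTINGS_PATH, "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)
```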

2 months ago 14 votes
Humanities Crash Course Week 17: Curiositas

In week 17 of the humanities crash course, I read a book that was completely new to me: Apuleius’s Metamorphoses, better known as The Golden Ass. I also watched a movie with a similar story (but with different aims).

Readings

The Golden Ass was written by Apuleius around the second century CE. The only complete Latin novel to survive, it tells the story of Lucius, a man whose reckless curiositas leads him to be accidentally transformed into an ass. (What is curiositas, you ask? Read on…)

As a donkey, Lucius goes from owner to owner, which exposes him to dangers, adventure, and gossip. Characters tell several sub-stories, mostly about crime, infidelity, and magic. The most famous is the story of Cupid and Psyche, a cautionary allegory that echoes the themes and structures of the novel as a whole.

Throughout his wanderings, Lucius is treated brutally. At one point a woman falls in love with him and treats him as a sex object. Eventually, the goddess Isis brings him back to human form after an initiation into her cult. He becomes an acolyte, making the story a metaphor for religious conversion. The final section of the book, where Lucius undergoes his spiritual transformation, is one of several surprising tone shifts: the book is by turns drama, horror, fairy tale, and bawdy farce. Overall, it gives an entertaining picture of moral codes in second-century Europe.

Audiovisual

Music: Scott Joplin. Again, a composer whose work was familiar to me. Rather than the usual piano solo versions, I listened to a recording of his works featuring André Previn on piano and Itzhak Perlman on violin.

Arts: van Gogh, who, like Joplin, is overly familiar. This lecture from The National Gallery helped put his work in context: I hadn’t realized the degree to which van Gogh’s paintings are the result of a tech innovation: synthetic pigments in the newly invented roll-up tubes. As always, understanding context is essential.

Cinema: Jerzy Skolimowski’s EO, a road picture that follows a donkey as he drifts through the Polish and Italian countrysides. Like Lucius, he’s exposed to humanity’s moral failings (and a tiny bit of tenderness). While visually and aurally stunning, I found the movie overbearingly preachy.

Reflections

As usual, I entered my reflections on the book into ChatGPT to ask what I might have missed or gotten wrong. My notes said Lucius’s curiosity about witchcraft led him to be transformed into an ass. ChatGPT corrected me: it wasn’t curiosity but curiositas. I asked for clarification, since the two terms are so similar.

As I now understand it, curiositas refers to “an immoderate appetite for forbidden or frivolous knowledge that distracts from real duties” — i.e., wasting time on B.S. of the sort one finds in tabloids, or chasing after forbidden knowledge. ChatGPT suggested as contemporary equivalents clickbait and doomscrolling, gossip culture (think the Kardashians), and “risk-blind experimentation” — i.e., the “move fast and break things” ethos — as the LLM put it, a “reckless desire to test the limits without counting the costs.” In other words, Lucius wasn’t punished (and ultimately disciplined) because he was curious. Instead, he “messed around and found out” — literally making an ass of himself.

For the ancients, the healthy opposite was studiositas, a “disciplined study in service of truth.” We’ll spend time with Thomas Aquinas later in the course; ChatGPT suggests he makes much of this distinction.

Notes on Note-taking

Last week, I said I’d return to ChatGPT 4o for its responsiveness.
I haven’t; o3’s results are enough better that the slightly longer wait is worth it. That said, I remain disappointed by o3’s preference for tables. One good sign: at one point, ChatGPT presented me with a brief A/B test asking me to pick between a table-based result and one with more traditional prose. Of course, I picked the latter. I hope they do away with the tables, or at least make them much less frequent.

Up Next

Gioia recommends selected readings from The Arabian Nights. While I’ve never read the original, several of these stories (Aladdin, Sinbad) are familiar through reinterpretations. I’m looking forward to reading the originals.

Again, there’s a YouTube playlist for the videos I’m sharing here. I’m also sharing these posts via Substack if you’d like to subscribe and comment. See you next week!

2 months ago 13 votes
Traction Heroes Ep. 8: Quagmires

There’s a lot of turbulence in the world. What is the source of the turbulence? And how can we navigate skillfully? These questions were on my mind as I met with Harry to record episode 8 of the Traction Heroes podcast.

My (at least partial) answer to the first question is that there’s a general lack of systems literacy in the world. Most people aren’t aware of the high degree of complexity that characterizes highly intertwingled systems such as modern economies. As a result, they opt for simplistic interventions that often do more harm than good.

At least that was my hypothesis. I was keen to hear Harry’s thoughts — and he didn’t disappoint. My prompt was the following passage from Donella Meadows’s classic Thinking in Systems: A Primer (emphasis in the original):

Ever since the Industrial Revolution, Western society has benefited from science, logic, and reductionism over intuition and holism. Psychologically and politically we would much rather assume that the cause of a problem is “out there,” rather than “in here.” It’s almost irresistible to blame something or someone else, to shift responsibility away from ourselves, and to look for the control knob, the product, the pill, the technical fix that will make a problem go away.

Serious problems have been solved by focusing on external agents—preventing smallpox, increasing food production, moving large weights and many people rapidly over long distances. Because they are embedded in larger systems, however, some of our “solutions” have created further problems. And some problems, those most rooted in the internal structure of complex systems, the real messes, have refused to go away.

Hunger, poverty, environmental degradation, economic instability, unemployment, chronic disease, drug addiction, and war, for example, persist in spite of the analytical ability and technical brilliance that have been directed toward eradicating them. No one deliberately creates those problems, no one wants them to persist, but they persist nonetheless. That is because they are intrinsically systems problems—undesirable behaviors characteristic of the system structures that produce them. They will yield only as we reclaim our intuition, stop casting blame, see the system as the source of its own problems, and find the courage and wisdom to restructure it.

Of course, the broader context was (and is) on my mind. But we’re all enmeshed in complex systems in our day-to-day lives. It behooves us to ponder whether the causes of problems are really “out there” — or whether, as Harry suggested, we need to be more introspective.

Traction Heroes ep. 8: Quagmires

2 months ago 33 votes

More in technology

You should repaste your MacBook (but don't)

My favorite memory of my M1 Pro MacBook Pro was the whole sensation of “holy crap, you never hear the fans in this thing,” which was very novel in 2021. Four years later, this MacBook Pro is still a delight. It’s the longest I’ve ever owned a laptop, and while I’d love to pick up the new M4 goodness, this dang thing still seems to just shrug at basically anything I throw at it. Video editing, code compiling, CAD models, the works. (My desire to upgrade is tempered, though, by the fact that I got the 2TB SSD, 32GB RAM option, and upgrading to those specs on new MacBooks is still eye-wateringly expensive.)

But my MacBook is starting to show its age in one area: it’s not quiet anymore. If you’re doing anything too intensive, like compiling code for a while or converting something in HandBrake, the age of quiet fans is long past. The fans are properly loud. (And despite having two cats, it’s not them! I clean out the fans pretty regularly.)

Enter the thermal paste

Everyone online seems to point toward one thing: the thermal paste on computers tends to dry up over the years. What the heck is thermal paste? Well, components in your computer that generate a lot of heat are normally made to touch something like a copper heatsink that is really good at pulling that heat away. The issue is, when you press these two metal surfaces against each other, even the best machining isn’t perfect, so there are microscopic gaps between them. Those gaps are just air, and air is a terrible conductor of heat. The solution is to put a little bit of thermal paste (basically a special grey toothpaste-like gunk that is really good at transferring heat) between them, so it fills in those microscopic gaps. The problem with this solution is that after hundreds and hundreds of days of intense heat, the paste can dry up into something closer to a powder, and it’s not nearly as good at filling in those gaps.

Replacement time

MacBook thermal paste isn’t anything crazy (for the most part, see below); custom PC builders use thermal paste all the time, so incredibly performant options are available online. I grabbed a tube of Noctua NT-H2 for about $10 and set to taking apart my MacBook to swap out the aging thermal paste. Thankfully, iFixit has a tremendous, in-depth guide on the disassembly required, so I got to it.

Indeed, that grey thermal paste looked quite old. But above and below it (on the RAM chips) I noticed something that didn’t quite seem like thermal paste; it was far more… grainy, almost? It turns out, ending with my generation of MacBooks (lucky me!), Apple used a very special kind of thermal compound often called “Carbon Black,” which is basically designed to bridge an even thicker gap than traditional thermal paste. I thought about replacing it, but it seems really hard to come across that special compound (and do not substitute normal thermal paste), and my RAM temperatures always seemed fine (65°C is fine… right?), so I just made sure not to touch that. For the regular grey thermal paste, I used some cotton swabs and isopropyl alcohol to remove the dried-up paste, then painted on a bit of the new stuff.

Disaster

To get to the underside of the CPU, you basically need to disassemble the entire MacBook. It’s honestly not that hard, but iFixit warned that the fan cables (which also need to be unclipped) are incredibly delicate.
And they’re not wrong: seriously, they have the structural integrity of the half-ply toilet paper available at gas stations. So, wouldn’t you know it, I moved the left fan’s cable a bit too hard and it completely tore in half. Gah.

I found a replacement fan online (you can’t just buy the cable; you need a whole new fan) and in the meantime I just kept an eye on my CPU thermals. As long as I wasn’t doing anything too intensive, it honestly stayed around 65°C, which was warm, but not terrifying (MacBook Airs completely lack a fan, after all).

Take two

A few days later, the fans arrived, and I basically had to redo the entire disassembly process to get to them. At least I was a lot faster this time. The fan was incredibly easy to swap out (hats off there, Apple!) and I screwed everything back together and began reconnecting all the little connectors. Until I saw it: the tiny Touch ID sensor cable (made of the same half-ply material as the fan cable) was inexplicably torn in half, the top half just hanging loose. I hadn’t even really touched this thing, and I hadn’t yet gotten to the stage of reconnecting it (I was about to!). It comes from underneath the logic board, and I guess just the movement of sliding the logic board back in sheared it in half. Bah.

I looked up whether I could just grab a replacement cable, and sure enough you can… but the Touch ID chip is cryptographically paired to your MacBook, so you’d have to take it into an Apple Store. Estimates seemed to be in the hundreds of dollars, so if anyone has any experience there, let me know; for now I’m just going to live happily without a Touch ID sensor… or the button, because the button also does not work. RIP, little buddy. (And yeah, I’m 99.9% sure I can’t solder this back together; there are a bunch of tiny lanes that make up the cable, and you’d need experience with proper micro-soldering to fix it.)

Honestly, the disassembly process for my MacBook was surprisingly friendly and not very difficult. I just really wish they beefed up some of the cables even slightly so they weren’t so delicate.

The results

I was going to cackle if I went through all that just to have identical temperatures as before, but I’m very happy to say they actually improved a fair bit. I ran a Cinebench test before disassembling the MacBook the very first time to establish a baseline:

- Max CPU temperature: 102°C
- Max fan speed: 6,300 RPM
- Cinebench score: 12,252

After the new thermal paste (and the new left fan):

- Max CPU temperature: 96°C
- Max fan speed: 4,700 RPM
- Cinebench score: 12,316

Just looking at those scores, you might be like… so? But let me tell you, dropping 1,600 RPM off the fan speed is a noticeable change: it goes from “oh my god this is annoyingly loud” to “oh look, the fans kicked in.” And despite slower fan speeds, there was still a decent drop in CPU temperature! And a 0.5% higher Cinebench score! Where I also really notice it is at idle: just writing this blog post, my CPU sat right at 46°C the whole time, where previously it idled right around 60°C. The whole computer just feels a bit healthier.

So… should you do it?

Honestly, unless you’re very used to working on small, delicate electronics, probably not. But if you do have that experience and are very careful, or have a local repair shop that can do it for a reasonable fee (and your MacBook is a few years old, so as to warrant it), it’s honestly a really nice tweak that I feel will hopefully at least get me to the M5 generation. I do miss Touch ID, though.

2 days ago 5 votes
Six Game Devs Speak to Computer Games Mag (1984)

Meet the Creators of Choplifter, Wizardry, Castle Wolfenstein, Zaxxon, Canyon Climber, and the Arcade Machine

2 days ago 4 votes
New AWS x Arduino Opta Workshop: Connect your PLC to the Cloud in just a few steps

We’re excited to invite you to a brand-new workshop created in collaboration with Amazon Web Services (AWS). Whether you’re modernizing factory operations or tinkering with your first industrial project, this hands-on workshop is your gateway to building cloud-connected PLCs that ship data – fast. At Arduino, we believe in making advanced technology more accessible. That’s […]

2 days ago 3 votes
The History of Acer

A Shy Kid Builds the Taiwanese Tech Industry

5 days ago 11 votes
Concept Bytes’ coffee table tracks people and walks itself across a room when called

The term “mmWave” refers to radio waves with wavelengths on the millimeter scale. When it comes to wireless communications technology, like 5G, mmWave allows for very fast data transfer — though that comes at the expense of range. But mmWave technology also has some very useful sensing and scanning applications, which you may have experienced […]

5 days ago 8 votes