This post is co-written by Greg Brockman and Ilya Sutskever. We’ve been working on OpenAI for the past three years. Our mission is to ensure that artificial general intelligence (AGI) — which we define as automated systems that outperform humans at most economically valuable work — benefits all of humanity. Today we announced a new legal structure for OpenAI, called OpenAI LP, to better pursue this mission — in particular to raise more capital as we attempt to build safe AGI and distribute its benefits. In this post, we’d like to help others understand how we think about this mission.

Why now? #

The founding vision of the field of AI was “… to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”, and to eventually build a machine that thinks — that is, an AGI. But over the past 60 years, progress stalled multiple times and people...
over a year ago


More from Greg Brockman

It's time to become an ML engineer

AI has recently crossed a utility threshold, where cutting-edge models such as GPT-3, Codex, and DALL-E 2 are actually useful and can perform tasks computers cannot do any other way. The act of producing these models is an exploration of a new frontier, with the discovery of unknown capabilities, scientific progress, and incredible product applications as the rewards. And perhaps most exciting for me personally, because the field is fundamentally about creating and studying software systems, great engineers are able to contribute at the same level as great researchers to future progress.

“A self-learning AI system.” by DALL-E 2.

I first got into software engineering because I wanted to build large-scale systems that could have a direct impact on people’s lives. I attended a math research summer program shortly after I started programming, and my favorite result of the summer was a scheduling app I built for people to book time with the professor. Specifying every detail of how a program should work is hard, and I’d always dreamed of one day putting my effort into hypothetical AI systems that could figure out the details for me. But after taking one look at the state of the art in AI in 2008, I knew it wasn’t going to work any time soon and instead started building infrastructure and product for web startups.

DALL-E 2’s rendition of “The two great pillars of the house of artificial intelligence” (which according to my co-founder Ilya Sutskever are great engineering, and great science using this engineering).

It’s now almost 15 years later, and the vision of systems which can learn their own solutions to problems is becoming incrementally more real. And perhaps most exciting is the underlying mechanism by which it’s advancing — at OpenAI, and in the field generally, precision execution on large-scale models is a force multiplier on AI progress, and we need more people with strong software skills who can deliver these systems. This is because we are building AI models out of unprecedented amounts of compute. These models in turn have unprecedented capabilities: we can discover new phenomena, explore the limits of what these models can and cannot do, and then use all these learnings to build the next model.

“Harnessing the most compute in the known universe” by DALL-E 2.

Harnessing this compute requires deep software skills and the right kind of machine learning knowledge. We need to coordinate lots of computers, build software frameworks that allow for hyperoptimization in some cases and flexibility in others, serve these models to customers really fast (which is what I worked on in 2020), and make it possible for a small team to manage a massive system (which is what I work on now). Engineers with no ML background can contribute from the day they join, and the more ML they pick up the more impact they have. The OpenAI environment makes it relatively easy to absorb the ML skills, and indeed, many of OpenAI’s best engineers transferred from other fields.

All that being said, AI is not for every software engineer. I’ve seen about a 50-50 success rate of engineers entering this field. The most important determiner is a specific flavor of technical humility. Many dearly-held intuitions from other domains will not apply to ML. The engineers who make the leap successfully are happy to be wrong (since it means they learned something), aren’t afraid not to know something, and don’t push solutions that others resist until they’ve gathered enough intuition to know for sure that it matches the domain.
“A beaver who has humbly recently become a machine learning engineer” by DALL-E 2.

I believe that AI research is today by far the most impactful place to work for engineers who want to build useful systems, and I expect this statement to become only more true as progress continues. If you’d like to work on creating the next generation of AI models, email me (gdb@openai.com) with any evidence of exceptional accomplishment in software engineering.

over a year ago 33 votes
How I became a machine learning practitioner

For the first three years of OpenAI, I dreamed of becoming a machine learning expert but made little progress towards that goal. Over the past nine months, I’ve finally made the transition to being a machine learning practitioner. It was hard but not impossible, and I think most people who are good programmers and know (or are willing to learn) the math can do it too. There are many online courses to self-study the technical side, and what turned out to be my biggest blocker was a mental barrier — getting ok with being a beginner again.

Studying machine learning during the 2018 holiday season.

Early days #

A founding principle of OpenAI is that we value research and engineering equally — our goal is to build working systems that solve previously impossible tasks, so we need both. (In fact, our team is composed of 25% people primarily using software skills, 25% primarily using machine learning skills, and 50% doing a hybrid of the two.) So from day one of OpenAI, my software skills were always in demand, and I kept procrastinating on picking up the machine learning skills I wanted. After helping build OpenAI Gym, I was called to work on Universe. And as Universe was winding down, we decided to start working on Dota — and we needed someone to turn the game into a reinforcement learning environment before any machine learning could begin.

Dota #

Turning such a complex game into a research environment without source code access was awesome work, and the team’s excitement every time I overcame a new obstacle was deeply validating. I figured out how to break out of the game’s Lua sandbox, LD_PRELOAD in a Go GRPC server to programmatically control the game, incrementally dump the whole game state into a Protobuf, and build a Python library and abstractions with future compatibility for the many different multiagent configurations we might want to use.

But I felt half blind. At Stripe, though I gravitated towards infrastructure solutions, I could make changes anywhere in the stack since I knew the product code intimately. In Dota, I was constrained to looking at all problems through a software lens, which sometimes meant I tried to solve hard problems that could be avoided by just doing the machine learning slightly differently.

I wanted to be like my teammates Jakub Pachocki and Szymon Sidor, who had made the core breakthrough that powered our Dota bot. They had questioned the common wisdom within OpenAI that reinforcement learning algorithms didn’t scale. They wrote a distributed reinforcement learning framework called Rapid and scaled it exponentially every two weeks or so, and we never hit a wall with it. I wanted to be able to make critical contributions like that, combining software and machine learning skills.

Szymon on the left; Jakub on the right.

In July 2017, it looked like I might have my chance. The software infrastructure was stable, and I began work on a machine learning project. My goal was to use behavioral cloning to teach a neural network from human training data. But I wasn’t quite prepared for just how much I would feel like a beginner. I kept being frustrated by small workflow details which made me uncertain if I was making progress, such as not knowing which code a given experiment had used, or realizing I needed to compare against a result from last week that I hadn’t properly archived. To make things worse, I kept discovering small bugs that had been corrupting my results the whole time. I didn’t feel confident in my work, but to make it worse, other people did.
People would mention how hard behavioral cloning from human data is. I always made sure to correct them by pointing out that I was a newbie, and that this probably said more about my abilities than the problem. It all briefly felt worth it when my code made it into the bot, as Jie Tang used it as the starting point for creep blocking, which he then fine-tuned with reinforcement learning. But soon Jie figured out how to get better results without using my code, and I had nothing to show for my efforts. I never tried machine learning on the Dota project again.

Time out #

After we lost two games at The International in 2018, most observers thought we’d topped out what our approach could do. But we knew from our metrics that we were right on the edge of success and mostly needed more training. This meant the demands on my time had relented, and in November 2018, I felt I had an opening to take a gamble with three months of my time.

Team members in high spirits after losing our first game at The International.

I learn best when I have something specific in mind to build. I decided to try building a chatbot. I started self-studying the curriculum we developed for our Fellows program, selecting only the NLP-relevant modules. For example, I wrote and trained an LSTM language model and then a Transformer-based one. I also read up on topics like information theory and read many papers, poring over each line until I fully absorbed it.

It was slow going, but this time I expected it. I didn’t experience flow state. I was reminded of how I’d felt when I just started programming, and I kept thinking of how many years it had taken to achieve a feeling of mastery. I honestly wasn’t confident that I would ever become good at machine learning. But I kept pushing because… well, honestly because I didn’t want to be constrained to only understanding one part of my projects. I wanted to see the whole picture clearly.

My personal life was also an important factor in keeping me going. I’d begun a relationship with someone who made me feel it was ok if I failed. I spent our first holiday season together beating my head against the machine learning wall, but she was there with me no matter how many planned activities it meant skipping.

One important conceptual step was overcoming a barrier I’d been too timid to attempt on Dota: making substantive changes to someone else’s machine learning code. I fine-tuned GPT-1 on chat datasets I’d found, and made a small change to add my own naive sampling code. But it became so painfully slow as I tried to generate longer messages that my frustration overwhelmed my fear, and I implemented GPU caching — a change which touched the entire model. I had to try a few times, throwing out my changes as they exceeded the complexity I could hold in my head. By the time I got it working a few days later, I realized I’d learned something that I would have previously thought impossible: I now understood how the whole model was put together, down to small stylistic details like how the codebase elegantly handles TensorFlow variable scopes.

Retooled #

After three months of self-study, I felt ready to work on an actual project. This was also the first point where I felt I could benefit from the many experts we have at OpenAI, and I was delighted when Jakub and my co-founder Ilya Sutskever agreed to advise me.

Ilya singing karaoke at our company offsite.

We started to get very exciting results, and Jakub and Szymon joined the project full-time.
I feel proud every time I see a commit from them in the machine learning codebase I’d started. I’m starting to feel competent, though I haven’t yet achieved mastery. I’m seeing this reflected in the number of hours I can motivate myself to spend focused on machine learning work — I’m now at around 75% of the coding hours I’ve historically been able to put in. But for the first time, I feel that I’m on trajectory.

At first, I was overwhelmed by the seemingly endless stream of new machine learning concepts. Within the first six months, I realized that I could make progress without constantly learning entirely new primitives. I still need to get more experience with many skills, such as initializing a network or setting a learning rate schedule, but now the work feels incremental rather than potentially impossible.

From our Fellows and Scholars programs, I’d known that software engineers with solid fundamentals in linear algebra and probability can become machine learning engineers with just a few months of self-study. But somehow I’d convinced myself that I was the exception and couldn’t learn. I was wrong — even embedded in the middle of OpenAI, I couldn’t make the transition while I was unwilling to become a beginner again.

You’re probably not an exception either. If you’d like to become a deep learning practitioner, you can. You need to give yourself the space and time to fail. If you learn from enough failures, you’ll succeed — and it’ll probably take much less time than you expect. At some point, it does become important to surround yourself with existing experts. And that is one place where I’m incredibly lucky. If you’re a great software engineer who reaches that point, keep in mind there’s a way you can be surrounded by the same people as I am — apply to OpenAI!
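The GPU caching mentioned above is what is now usually called key-value caching in autoregressive sampling: naive sampling re-runs the model over the whole prefix for every new token, while a cache stores each position's keys and values so each step only processes the newest token. Here is a minimal single-head sketch of the idea in Python/numpy, with toy weights and invented names, not GPT-1's actual TensorFlow code:

import numpy as np

def attend(q, K, V):
    # Scaled dot-product attention for one query over all cached positions.
    scores = K @ q / np.sqrt(q.shape[0])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

d = 16
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
K_cache = np.empty((0, d))  # one cached key row per generated token
V_cache = np.empty((0, d))  # one cached value row per generated token

def step(x):
    # Process one new token embedding x, reusing cached keys/values,
    # so each step runs only on the newest token instead of the whole prefix.
    global K_cache, V_cache
    K_cache = np.vstack([K_cache, x @ Wk])
    V_cache = np.vstack([V_cache, x @ Wv])
    return attend(x @ Wq, K_cache, V_cache)

for t in range(5):
    out = step(rng.standard_normal(d))  # one sampling step per new token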

over a year ago 33 votes
OpenAI Five Finals Intro

The text of my speech introducing OpenAI Five at Saturday’s OpenAI Five Finals event, where our AI beat the world champions at Dota 2:

“Welcome everyone. This is an exciting day. First, this is an historic moment: this will be the first time that an AI has even attempted to play the world champions in an esports game. OG is simply on another level relative to other teams we’ve played. So we don’t know what’s going to happen, but win or lose, these will be games to remember. And you know, between OpenAI Five and DeepMind’s very impressive StarCraft bot, this is quite a moment for AI and esports.

This event is really about something bigger than who wins or loses: letting people connect with the strange, exotic, yet tangible intelligences produced by today’s rapidly progressing AI technology. We’re all used to computer programs which have been meticulously coded by a human programmer. Do one thing that the human didn’t anticipate, and the program will break. We think of our computers as unthinking machines which can’t innovate, can’t be creative, can’t truly understand. But to play Dota, you need to do all these things. So we needed to do something different.

OpenAI Five is powered by deep reinforcement learning — meaning that we didn’t code in how to play Dota. We instead coded in how to learn. Five tries out random actions, and learns from a reward or punishment. In its 10 months of training, it’s experienced 45,000 years of Dota gameplay against itself. The playstyles it has devised are its own — truly creative, dreamed up by our computer — and so from Five’s perspective, today’s games are going to be its first encounter with an alien intelligence (no offense to OG!).

The beauty of this technology is that our learning code doesn’t know it’s meant for Dota. That makes it general purpose, with amazing potential to benefit our lives. Last year we used it to control a robotic hand that no one could program. And we expect to see similar technology in new interactive systems, from elderly care robots to creative assistants to other systems we can’t dream of yet.

This is the final public event for OpenAI Five, but we expect to do other Dota projects in the future. I want to thank the incredible team at OpenAI, everyone who worked directly on this project or cheered us on. I want to thank those who have supported the project: Valve, dozens of test teams, today’s casters, and yes, even all the commenters on Reddit. And I want to give massive thanks today to our fantastic guests OG, who have taken time out of their tournament schedule to be here today. I hope you enjoy the show — and just to keep things in perspective, no matter how surprising the AIs are to us, know that we’re even more surprising to them!”
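The "tries out random actions, and learns from a reward or punishment" loop has a very small core. Below is a toy policy-gradient sketch on a two-armed bandit, purely illustrative: OpenAI Five actually trained with large-scale PPO self-play on the Rapid framework, not anything this simple.

import numpy as np

# REINFORCE on a two-armed bandit: try actions at random, then nudge the
# policy toward whichever actions happened to earn reward.
rng = np.random.default_rng(0)
logits = np.zeros(2)                 # policy parameters, one per action
true_payoff = np.array([0.2, 0.8])   # hidden reward probability per action
lr = 0.1

for _ in range(2000):
    probs = np.exp(logits) / np.exp(logits).sum()
    action = rng.choice(2, p=probs)                      # try an action
    reward = float(rng.random() < true_payoff[action])   # reward or punishment
    grad = -probs
    grad[action] += 1.0              # gradient of log pi(action) w.r.t. logits
    logits += lr * reward * grad     # reinforce rewarded actions

probs = np.exp(logits) / np.exp(logits).sum()
print(probs)  # converges to strongly favor the higher-payoff arm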

over a year ago 32 votes
OpenAI Five intro

The text of my speech introducing OpenAI Five at yesterday’s Benchmark event:

“We’re here to watch humans and AI play Dota, but today’s match will have implications for the world. OpenAI’s mission is to ensure that when we can build machines as smart as humans, they will benefit all of humanity. That means both pushing the limits of what’s possible and ensuring future systems are safe and aligned with human values.

We work on Dota because it is a great training ground for AI: it is one of the most complicated games, involving teamwork, real-time strategy, imperfect information, and an astronomical number of combinations of heroes and items. We can’t program a solution, so Five learns by playing 180 years of games against itself every day — sadly that means we can’t learn from the players up here unless they played for a few decades. It’s powered by 5 artificial neural networks which act like an artificial intuition. Five’s neural networks are about the size of the brain of an ant — still far from what we all have in our heads.

One year ago, we beat the world’s top professionals at 1v1 Dota. People thought 5v5 would be totally out of reach. 1v1 requires mechanics and positioning; people did not expect the same system to learn strategy. But our AI system can learn to solve problems it was not even designed to solve — we just used the same technology to learn to control a robotic hand — something no one could program. The computational power for OpenAI Five would have been impractical two years ago. But the availability of computation for AI has been increasing exponentially, doubling every 3.5 months since 2012, and one day technologies like this will become commonplace.

Feel free to root for either team. Either way, humanity wins.”

I’m very excited to see where the upcoming months of OpenAI Five development and testing take us.

over a year ago 31 votes

More in programming

Seeing the Matrix: A First-Principles Approach to Computer Architecture

Building a mental model of computer architecture from first principles

2 hours ago 2 votes
The blissful zen of a good side project

One of life's greatest simple pleasures is creating something just for yourself.

yesterday 5 votes
How to resource Engineering-driven projects at Calm? (2020)

One of the recurring challenges in any organization is how to split your attention across long-term and short-term problems. Your software might be struggling to scale with ramping user load while you also know you have a series of meaningful security vulnerabilities that need to be closed sooner than later. How do you balance across them?

These sorts of balance questions occur at every level of an organization. A particularly frequent format is the debate between Product and Engineering about how much time goes towards developing new functionality versus improving what’s already been implemented. In 2020, Calm was growing rapidly as we navigated the COVID-19 pandemic, and the team was struggling to make improvements, as they felt saturated by incoming new requests. This strategy for resourcing Engineering-driven projects was our attempt to solve that problem.

This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Reading this document

To apply this strategy, start at the top with Policy. To understand the thinking behind this strategy, read sections in reverse order, starting with Explore. More detail on this structure in Making a readable Engineering Strategy document.

Policy & Operation

Our policies for resourcing Engineering-driven projects are:

- We will protect one Eng-driven project per product engineering team, per quarter. These projects should represent a maximum of 20% of the team’s bandwidth.
- Each project must advance a measurable metric, and execution must be designed to show progress on that metric within 4 weeks.
- These projects must adhere to Calm’s existing Engineering strategies.
- We resource these projects first in the team’s planning, rather than last. However, only concrete projects are resourced. If there’s no concrete proposal, then the team won’t have time budgeted for Engineering-driven work.
- The team’s engineering manager is responsible for deciding on the project, ensuring the project is valuable, and pushing back on attempts to defund the project.
- Project selection does not require CTO approval, but you should escalate to the CTO if there’s friction or disagreement.
- The CTO will review Engineering-driven projects each quarter to summarize their impact and provide feedback to teams’ engineering managers on project selection and execution. They will also review teams that did not perform a project to understand why not.

As we’ve communicated this strategy, we’ve frequently gotten conceptual alignment that this sounds reasonable, coupled with uncertainty about what sort of projects should actually be selected. At some level, this ambiguity is an acknowledgment that we believe teams will identify the best opportunities bottoms-up. Even so, we wanted to give two concrete examples of projects we’re greenlighting in the first batch.

Code-free media release: historically, we’ve needed to make a number of pull requests to add, organize, and release new pieces of media. This is high-urgency work, but Engineering doesn’t exercise much judgment while doing it, and manual steps often create errors. We aim to track and eliminate these pull requests, while also increasing the number of releases that can be facilitated without scaling the content release team.

Machine-learning content placement: developing new pieces of media is often a multi-week or month-long process.
After content is ready to release, there’s generally a debate on where to place the content. This matters for the company, as this drives engagement with our users, but it matters even more to the content creator, who is generally evaluated in terms of their content’s performance. This often leads to Product and Engineering getting caught up in debates about how to surface particular pieces of content. This project aims to improve user engagement by surfacing the best content for each user’s interests, while also giving the Content team several explicit positions to highlight content without Product and Engineering involvement.

Although these projects are similar, it’s not intended that all Engineering-driven projects be of this variety. Instead it’s happenstance, based on what the teams view as their biggest opportunities today.

Diagnosis

Our assessment of the current situation at Calm is:

- We are spending a high percentage of our time on urgent but low engineering value tasks. Most significantly, about one-third of our time is going into launching, debugging, and changing content that we release into our product. Engineering is involved due to limitations in our implementation, not because there is any inherent value in Engineering’s involvement. (We mostly just make releases slowly and inadvertently introduce bugs of our own.)
- We have a bunch of fairly clear ideas around improving the platform to empower the Content team to speed up releases, and to eliminate the Engineering involvement. However, we’ve struggled to find time to implement them, or to validate that these ideas will work.
- If we don’t find a way to prioritize, and succeed at implementing, a project to reduce Engineering involvement in Content releases, we will struggle to support our goals to release more content and to develop more product functionality this year.
- Our Infrastructure team has been able to plan and make these kinds of investments stick. However, when we attempt these projects within our Product Engineering teams, things don’t go that well. We are good at getting them onto the initial roadmap, but then they get deprioritized due to pressure to complete other projects.
- The Engineering team is not very fungible due to its small size (20 engineers), and because we have many specializations within the team: iOS, Android, Backend, Frontend, Infrastructure, and QA.
- We would like to staff these kinds of projects onto the Infrastructure team, but in practice that team does not have the product development experience to implement this kind of project. We’ve discussed spinning up a Platform team, or moving product engineers onto Infrastructure, but that would either (1) break our goal to maintain joint pairs between Product Managers and Engineering Managers, or (2) be indistinguishable from prioritizing within the existing team because it would still have the same Product Manager and Engineering Manager pair.
- Company planning is organic, occurring in many discussions and limited structured process. If we make a decision to invest in one project, it’s easy for that project to get deprioritized in a side discussion missing context on why the project is important. These reprioritization discussions happen both in executive forums and in team-specific forums. There’s imperfect awareness across these two sorts of forums.

Explore

Prioritization is a deep topic with a wide variety of popular solutions.
For example, many software companies rely on “RICE” scoring, calculating priority as (Reach times Impact times Confidence) divided by Effort. At the other extreme are complex methodologies like [Scaled Agile Framework](https://en.wikipedia.org/wiki/Scaled_agile_framework).

In addition to generalized planning solutions, many companies carve out special mechanisms to solve for particular prioritization gaps. Google historically offered 20% time to allow individuals to work on experimental projects that didn’t align directly with top-down priorities. Stripe’s Foundation Engineering organization developed the concept of Foundational Initiatives to prioritize cross-pillar projects with long-term implications, which otherwise struggled to get prioritized within the team-led planning process.

All these methods have clear examples of succeeding, and equally clear examples of struggling. Where these initiatives have succeeded, they had an engaged executive sponsoring the practice’s rollout, including triaging escalations when the rollout inconvenienced supporters of the prior method. Where they lacked a sponsor, or were misaligned with the company’s culture, these methods have consistently failed despite having previously succeeded elsewhere.
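The RICE formula is easy to make concrete. Here is a minimal sketch in Python; the projects and numbers below are invented for illustration, not Calm's actual estimates:

from dataclasses import dataclass

@dataclass
class Project:
    name: str
    reach: float       # users or events affected per quarter
    impact: float      # e.g. 0.25 (minimal) up to 3 (massive)
    confidence: float  # 0.0 to 1.0
    effort: float      # person-months

def rice(p: Project) -> float:
    # RICE score: (Reach * Impact * Confidence) / Effort
    return p.reach * p.impact * p.confidence / p.effort

backlog = [
    Project("code-free media release", reach=400, impact=2.0, confidence=0.8, effort=3),
    Project("ml content placement", reach=20000, impact=0.5, confidence=0.5, effort=6),
]
for p in sorted(backlog, key=rice, reverse=True):
    print(f"{p.name}: {rice(p):.0f}")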

2 days ago 7 votes
Personal tools

I used to make little applications just for myself. Sixteen years ago (oof) I wrote a habit tracking application, and a keylogger that let me keep track of when I was using a computer, and generate some pretty charts.

I’ve taken a long break from those kinds of things. I love my hobbies, but they’ve drifted toward the non-technical, and the idea of keeping a server online for a fun project is unappealing (which is something that I hope Val Town, where I work, fixes). Some folks maintain whole ‘homelab’ setups and run Kubernetes in their basement. Not me, at least for now. But I have been tiptoeing back into some little custom tools that only I use, with a focus on just my own computing experience. Here’s a quick tour.

Hammerspoon

Hammerspoon is an extremely powerful scripting tool for macOS that lets you write custom keyboard shortcuts, UIs, and more with the very friendly little language Lua. Right now my Hammerspoon configuration is very simple, but I think I’ll use it for a lot more as time progresses. Here it is:

hs.hotkey.bind({"cmd", "shift"}, "return", function()
  local frontmost = hs.application.frontmostApplication()
  if frontmost:name() == "Ghostty" then
    frontmost:hide()
  else
    hs.application.launchOrFocus("Ghostty")
  end
end)

Not much! But I recently switched to Ghostty as my terminal, and I heavily relied on iTerm2’s global show/hide shortcut. Ghostty doesn’t have an equivalent, and Mikael Henriksson suggested a script like this in GitHub discussions, so I ran with it. Hammerspoon can do practically anything, so it’ll probably be useful for other stuff too.

SwiftBar

I review a lot of PRs these days. I wanted an easy way to see how many were in my review queue and go to them quickly. So, this script runs with SwiftBar, which is a flexible way to put any script’s output into your menu bar. It uses the GitHub CLI to list the PRs, and jq to massage that output into a friendly list, which I can click on to go directly to each PR on GitHub.

#!/bin/bash
# <xbar.title>GitHub PR Reviews</xbar.title>
# <xbar.version>v0.0</xbar.version>
# <xbar.author>Tom MacWright</xbar.author>
# <xbar.author.github>tmcw</xbar.author.github>
# <xbar.desc>Displays PRs that you need to review</xbar.desc>
# <xbar.image></xbar.image>
# <xbar.dependencies>Bash GNU AWK</xbar.dependencies>
# <xbar.abouturl></xbar.abouturl>

DATA=$(gh search prs --state=open -R val-town/val.town --review-requested=@me --json url,title,number,author)

echo "$(echo "$DATA" | jq 'length') PR"
echo '---'

echo "$DATA" | jq -c '.[]' | while IFS= read -r pr; do
  TITLE=$(echo "$pr" | jq -r '.title')
  AUTHOR=$(echo "$pr" | jq -r '.author.login')
  URL=$(echo "$pr" | jq -r '.url')
  echo "$TITLE ($AUTHOR) | href=$URL"
done

Tampermonkey

Tampermonkey is essentially a twist on Greasemonkey: both let you run your own JavaScript on anybody’s webpage. Sidenote: Greasemonkey was created by Aaron Boodman, who went on to write Replicache, which I used in Placemark, and is now working on Zero, the successor to Replicache.

Anyway, I have a few fancy credit cards which have ‘offers’ which only work if you ‘activate’ them. This is an annoying dark pattern! And there’s a solution to it - CardPointers - but I neither spend enough nor care enough about points hacking to justify the cost. Plus, I’d like to know what code is running on my bank website. So, Tampermonkey to the rescue! I wrote userscripts for Chase, American Express, and Citi.
You can check them out on this Gist, but I strongly recommend reading through all the code because of the aforementioned risks of running untrusted code on your bank account’s website!

Obsidian Freeform

This is a plugin for Obsidian, the notetaking tool that I use every day. Freeform is pretty cool, if I can say so myself (I wrote it), but could be much better. The development experience is lackluster because you can’t preview output at the same time as writing code: you have to toggle between the two states. I’ll fix that eventually, or perhaps Obsidian will add a new API that makes it all work.

I use Freeform for a lot of private health & financial data, almost always with an Observable Plot visualization as an eventual output. For example, when I was switching banks and one of the considerations was mortgage discounts in case I ever buy a house (ha 😢), it was fun to chart out the % discounts versus the required AUM. It’s been really nice to have this kind of visualization as ‘just another document’ in my notetaking app. Doesn’t need another server, and Obsidian is pretty secure and private.

2 days ago 7 votes
All conference talks should start with a small technical glitch that the speaker can easily solve

At a conference a while back, I noticed a couple of speakers get such a confidence boost after solving a small technical glitch. We should probably make that a part of every talk: have the mic not connect automatically, leave an almost-complete puzzle on the stage that the speaker can finish, or have someone forget their badge so the speaker can return it to them. Maybe the next time I, or a consenting teammate, have to give a presentation I’ll try to engineer such a situation. Originally published by Ognjen Regoje at ognjen.io on April 03, 2025.

3 days ago 5 votes