More from Paolo Amoroso's Journal
Printing rich text to windows is one of the planned features of DandeGUI, the GUI library for Medley Interlisp I'm developing in Common Lisp. I finally got around to this and implemented the GUI:WITH-TEXT-STYLE macro, which controls the attributes of text printed to a window, such as the font family and face.

GUI:WITH-TEXT-STYLE establishes a context in which text printed to the stream associated with a TEdit window is rendered in the style specified by the arguments. The call to GUI:WITH-TEXT-STYLE here extends the square root table example by printing the heading in a 12-point bold sans serif font:

    (gui:with-output-to-window (stream :title "Table of square roots")
      (gui:with-text-style (stream :family :sans :size 12 :face :bold)
        (format stream "~&Number~40TSquare Root~2%"))
      (loop for n from 1 to 30
            do (format stream "~&~4D~40T~8,4F~%" n (sqrt n))))

The code produces this window in which the styled column headings stand out:

[Image: Medley Interlisp window of a square root table generated by the DandeGUI GUI library.]

The :FAMILY, :SIZE, and :FACE arguments determine the corresponding text attributes. :FAMILY may be a generic family such as :SERIF for an unspecified serif font, :SANS for a sans serif font, or :FIX for a fixed width font; or a keyword denoting a specific family like :TIMESROMAN.

At the heart of GUI:WITH-TEXT-STYLE is a pair of calls to the Interlisp function PRINTOUT that wrap the macro body, the first for setting the font of the stream to the specified style and the other for restoring the default:

    (DEFMACRO WITH-TEXT-STYLE ((STREAM &KEY FAMILY SIZE FACE) &BODY BODY)
      (ONCE-ONLY (STREAM)
        `(UNWIND-PROTECT
             (PROGN
               (IL:PRINTOUT ,STREAM IL:.FONT (TEXT-STYLE-TO-FD ,FAMILY ,SIZE ,FACE))
               ,@BODY)
           (IL:PRINTOUT ,STREAM IL:.FONT DEFAULT-FONT))))

PRINTOUT is an Interlisp function for formatted output similar to Common Lisp's FORMAT but with additional font control via the .FONT directive. The symbols of PRINTOUT, i.e. its directives and arguments, are in the Interlisp package.

In turn GUI:WITH-TEXT-STYLE calls GUI::TEXT-STYLE-TO-FD, an internal DandeGUI function which passes to .FONT a font descriptor matching the required text attributes. GUI::TEXT-STYLE-TO-FD calls IL:FONTCOPY to build a descriptor that merges the specified attributes with any unspecified ones copied from the default font. The font descriptor is an Interlisp data structure that represents a font in the Medley environment.

#DandeGUI #CommonLisp #Interlisp #Lisp
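Given the macro definition above, the gui:with-text-style call in the example expands into roughly the following. This is only an illustration of the mechanism: styled-stream stands in for the gensym that ONCE-ONLY introduces, and the package qualifiers are spelled out here for clarity.

    ;; Rough expansion of the gui:with-text-style call above; styled-stream
    ;; stands in for the gensym generated by ONCE-ONLY.
    (let ((styled-stream stream))
      (unwind-protect
          (progn
            ;; Set the stream's font to 12-point bold sans serif.
            (il:printout styled-stream il:.font
                         (gui::text-style-to-fd :sans 12 :bold))
            ;; The macro body, printing the table heading.
            (format stream "~&Number~40TSquare Root~2%"))
        ;; Restore the default font even if the body exits with an error.
        (il:printout styled-stream il:.font gui::default-font)))

The UNWIND-PROTECT is what guarantees the stream's font is reset to the default even when the body signals an error.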
I continued working on DandeGUI, a GUI library for Medley Interlisp I'm writing in Common Lisp. I added two new short public functions, GUI:CLEAR-WINDOW and GUI:PRINT-MESSAGE, and fixed a bug in some internal code.

GUI:CLEAR-WINDOW deletes the text of the window associated with the Interlisp TEXTSTREAM passed as the argument:

    (DEFUN CLEAR-WINDOW (STREAM)
      "Delete all the text of the window associated with STREAM. Returns STREAM."
      (WITH-WRITE-ENABLED (STR STREAM)
        (IL:TEDIT.DELETE STR 1 (IL:TEDIT.NCHARS STR)))
      STREAM)

It's little more than a call to the TEdit API function IL:TEDIT.DELETE for deleting text in the editor buffer, wrapped in the internal macro GUI::WITH-WRITE-ENABLED that establishes a context for write access to a window.

I also wrote GUI:PRINT-MESSAGE. This function prints a message to the prompt area of the window associated with the TEXTSTREAM passed as an argument, optionally clearing the area prior to printing. The prompt area is a one-line Interlisp prompt window attached above the title bar of the TEdit window where the editor displays errors and status messages.

    (DEFUN PRINT-MESSAGE (STREAM MESSAGE &OPTIONAL DONT-CLEAR-P)
      "Print MESSAGE to the prompt area of the window associated with STREAM.
    If DONT-CLEAR-P is non-NIL the area will not be cleared first. Returns STREAM."
      (IL:TEDIT.PROMPTPRINT STREAM MESSAGE (NOT DONT-CLEAR-P))
      STREAM)

GUI:PRINT-MESSAGE just passes the appropriate arguments to the TEdit API function IL:TEDIT.PROMPTPRINT, which does the actual printing. The documentation of both functions is in the API reference on the project repo.

Testing DandeGUI revealed that sometimes text wasn't appended to the end of windows but inserted at the beginning. To address the issue I changed GUI::WITH-WRITE-ENABLED to ensure the file pointer of the stream is set to the end of the file (i.e. -1) prior to passing control to output functions. The fix was to add a call to the Interlisp function IL:SETFILEPTR:

    (IL:SETFILEPTR ,STREAM -1)

#DandeGUI #CommonLisp #Interlisp #Lisp
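The internal GUI::WITH-WRITE-ENABLED macro itself isn't shown in the post, so purely as an illustration of where that fix sits, here is a minimal sketch of what such a macro could look like. ENABLE-WRITE and DISABLE-WRITE are hypothetical placeholder names for however DandeGUI actually toggles write access to the TEdit window; only the IL:SETFILEPTR call is taken from the post.

    ;; Hypothetical sketch, not DandeGUI's actual macro.  ENABLE-WRITE and
    ;; DISABLE-WRITE are placeholder names for the real write-access handling.
    (DEFMACRO WITH-WRITE-ENABLED ((VAR STREAM) &BODY BODY)
      (ONCE-ONLY (STREAM)
        `(UNWIND-PROTECT
             (PROGN
               (ENABLE-WRITE ,STREAM)
               ;; The fix: move the file pointer to the end of the stream so
               ;; output is appended rather than inserted at the beginning.
               (IL:SETFILEPTR ,STREAM -1)
               (LET ((,VAR ,STREAM))
                 ,@BODY))
           (DISABLE-WRITE ,STREAM))))

With a definition along these lines, GUI:CLEAR-WINDOW and the other output functions receive a stream whose file pointer is already at the end before they write.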
I spoke too soon when I said I was enjoying the stability of Linux. I have been using Linux Mint Cinnamon on a System76 Merkaat PC with no major issues since July of 2024. But a few days ago a routine system update of Mint 22 dumped me to the text console. A fresh install of Mint 22.1, the latest release, brought the system back online. I had backups, and the mishap luckily turned out to be just an annoyance that consumed several hours of unplanned maintenance.

It all started when the Mint Update Manager listed several packages for update, including the System76 driver and tools. Oddly, the Update Manager also marked several packages for removal, including core ones such as Xorg, Celluloid, and more. The smooth running of Mint had lulled my paranoid side to sleep and I applied the recommended changes. At the next reboot the graphics session didn't start and left me at the text console with no clue about what had happened.

I don't use Timeshift for system snapshots as I prefer a fresh install and a restore of data backups if the system breaks. Therefore, to fix an issue apparently related to Mint 22, the obvious route was to install Mint 22.1. Besides, this was the right occasion to try the new release.

On my Raspberry Pi 400 I ran dd to flash a bootable USB stick with Mint 22.1. I had no alternative as GNOME Disks didn't work. The Merkaat failed to boot off the stick, possibly because I messed up the arguments of dd. I still had a USB stick with Mint 22 lying around and used it to do a fresh install on the Merkaat. Then I immediately ran the upgrade to Mint 22.1, which completed successfully, unlike a prior upgrade attempt.

Next, I tried to install the System76 driver with sudo apt install system76-driver but got a package not found error. At that point I had already added the System76 package repository to the APT sources, and refreshing the Mint Update Manager yielded this error:

    Could not refresh the list of updates
    Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages

Aside from the errors the system was up and running on the Merkaat, so with Nemo I reflashed the Mint 22.1 stick. This time the PC did boot off the stick and let me successfully install Mint 22.1. Restoring the data completed the system recovery. I left out the System76 driver as it's the primary suspect, possibly due to package conflicts. Mint detects and supports all the hardware of the Merkaat anyway, so it's only prudent to skip the package for the time being.

Besides improvements under the hood, Mint 22.1 features a redesigned default Cinnamon theme. No major changes; I feel at home.

The main takeaway of this adventure is that it's better to keep a bootable USB stick ready with the latest Mint release, even if I don't plan to upgrade immediately. Another takeaway is that the Pi 400 makes for a viable backup computer that can support my major tasks, should it take longer to recover the Merkaat. However, using the device for making bootable media is problematic, as little flashing software is available and some of it is unreliable. Finally, over decades of Linux experience I have honed my emergency installation skills so much that I can now confidently address most broken-system situations.

#linux #pi400
My journey to Lisp began in the early 1990s. Over three decades later, a few days ago I rediscovered the first Lisp environment I ever used back then, which contributed to my love for the language. Here it is, PC Scheme running under DOSBox-X on my Linux PC:

[Image: Screenshot of the PC Scheme Lisp development environment for MS-DOS by Texas Instruments running under DOSBox-X on Linux Mint Cinnamon.]

Using PC Scheme again brought back lots of great memories and made me reflect on what the environment taught me about Lisp and Lisp tooling.

As a Computer Science student at the University of Milan, Italy, around 1990 I took an introductory computers and programming class taught by Prof. Stefano Cerri. The textbook was the first edition of Structure and Interpretation of Computer Programs (SICP), and Texas Instruments PC Scheme for MS-DOS was the recommended PC implementation. I installed PC Scheme under DR-DOS on a 20 MHz 386 Olidata laptop with 2 MB RAM and a 40 MB hard disk drive.

Prior to the class I had read about Lisp here and there but never played with the language. SICP and its use of Scheme as an elegant executable formalism instantly fascinated me. It was Lisp love at first sight. The class covered the first three chapters of the book but I later read the rest on my own. I did lots of exercises, using PC Scheme to write and run them. Soon I became one with PC Scheme.

The environment enabled a tight development loop thanks to its Emacs-like EDWIN editor that was well integrated with the system. The Lisp awareness of EDWIN blew my mind, as it was the first such tool I encountered. The editor auto-indented and reformatted code, matched parentheses, and supported evaluating expressions and code blocks. Typing a closing parenthesis made EDWIN blink the corresponding opening one and briefly show a snippet of the beginning of the matched expression.

Paying attention to the matching and the snippets made me familiar with the shape and structure of Lisp code, giving me a visual feel for whether code looks syntactically right or off. Within hours of starting to use EDWIN the parentheses ceased to be a concern and disappeared from my conscious attention. Handling parentheses came naturally. I actually ended up loving parentheses and the aesthetics of classic Lisp.

Parenthesis matching suggested to me a technique for writing syntactically correct Lisp code with pen and paper. When writing a closing parenthesis with my right hand, I rested my left hand on the paper with the index finger pointed at the corresponding opening parenthesis, moving the hands in sync to match the current code. This way it was fast and easy to write moderately complex code.

PC Scheme spoiled me and set the baseline of what to expect in a Lisp environment. After the class I moved to PCS/Geneva, a more advanced PC Scheme fork developed at the University of Geneva. Over the following decades I encountered and learned Common Lisp, Emacs Lisp, and Interlisp. These experiences cemented my passion for Lisp.

In the mid-1990s Texas Instruments released the executable and sources of PC Scheme. I didn't know it at the time, or if I noticed I long forgot. Until a few days ago, when nostalgia came knocking and I rediscovered the PC Scheme release. I installed PC Scheme under the DOSBox-X MS-DOS emulator on my Linux Mint Cinnamon PC. It runs well and I enjoy going through the system to rediscover what it can do.

Playing with PC Scheme after decades of Lisp experience, and with hindsight on the evolution of computing, shines new light on the environment. I didn't fully realize it at the time, but the product packed amazing value for the price. It cost $99 in the US and I paid about 150,000 lira for it in Italy. Costing as much as two or three textbooks, the software was affordable even to students and hobbyists.

PC Scheme is a rich, fast, and surprisingly capable environment with features such as a Lisp-aware editor, a good compiler, a structure editor and other tools, many Scheme extensions such as engines and OOP, text windows, graphics, and a lot more. The product came with an extensive manual, a thick book in a massive 3-ring binder I read cover to cover more than once. A paper on the implementation of PC Scheme sheds light on how good the system is given the platform constraints.

Using PC Scheme now lets me put into focus what it taught me about Lisp and Lisp systems: the convenience and productivity of Lisp-aware editors; interactive development and exploratory programming; and a rich Lisp environment with a vast toolbox of libraries and facilities — this is your grandfather's batteries-included language.

Three decades after PC Scheme, a similar combination of features, facilities, and classic aesthetics drew me to Medley Interlisp, my current daily driver for Lisp development.

#Lisp #MSDOS #retrocomputing
More in programming
Explore Remix's new React Server Components (RSC) preview in react-router! Learn usage, different approaches, and trade-offs.
Have you thought about doing the opposite of whatever you're doing or considering? It's a really helpful way to test your assumptions and your values. What does the opposite look like, how would it work?

It's so easy to get stuck in a groove of what works, what you believe to be right. But helpful assumptions have a half-life, just like facts. And it's ever so easy to miss the shift when circumstances change, if you're not regularly stress-testing your core beliefs.

That doesn't mean you're just a flag in the wind, blowing whichever way. But it does mean having enough intellectual humility and creative flexibility to consider that what you believe to be true about your business, about your team, about your technology might not be so.

We did this a while back with full-time managers. We'd been working for nearly two decades without any, but exactly because it'd been so long, we were drawn to try the opposite, just to see what we might have missed. So we did. Hired a few full-time managers to help us test that assumption for a few years. In the end, we decided that our managers-of-one culture worked better, but it wasn't a given at the outset.

To try the opposite, you really have to believe that you might have been wrong. Because you're wrong about something. I guarantee it. We all are.
When interviewing with a Japanese company, you’ll naturally want to know: “Is this a good place to work?” And while Glassdoor is the standard in English-speaking countries for employees leaving online reviews, the site is only rarely used in Japan, and then primarily by non-Japanese workers. Many countries have a culture that endorses directly reviewing employers in an open, public environment—Japan does not. However, there are still sites where you can find important information on your potential employer.

What to watch out for

In particular, you want to avoid signing on with a company that engages in exploitative practices—or as they’re known in Japan, a “black company” (ブラック企業, burakku kigyou). The Ministry of Health, Labor, and Welfare has a FAQ describing what defines these companies:

- Imposing extremely long working hours with high quotas.
- Recognition of workers’ rights is low throughout the company; unpaid overtime and/or workplace bullying (パワハラ, pawahara) are common.
- The company assigns shifts to workers without consent.
- The company discriminates among workers in the above circumstances.

In a 2023 survey, those who had worked for such toxic companies listed high turnover rates as the most common sign that something was wrong, followed by long working hours and unpaid overtime. As you examine online review sites and other sources, look for clues such as:

- Turnover rate: how long do employees typically stay?
- Internal promotion: can you see employees rising in the ranks?
- Upper management: are there any non-Japanese employees in management positions?
- Recent company announcements: do they often make sudden pivots in their business policies?

If you discover, for example, that the company can’t retain employees, shows no history of internal promotions, and has just issued a return-to-office order out of the blue, it’s safe to assume you don’t want to work there.

OpenWork

OpenWork, also known as Vorkers, hosts over 19 million company reviews. The reviews are represented in a radar chart for easy visual reference, and are also broken down into different categories, such as work-life balance, the ease of working for women, and reasons for considering quitting. In addition, applicants can post questions for employees to answer. If you don’t speak Japanese, the site is still readable with Google Translate. You’ll need to make a free account to see all of the information, but much of it is accessible even without an account.

Other Japanese sites

JobTalk and Engage Hyouban are other Japanese-language review sites. JobTalk contains 4.4 million reviews of around 230,000 different companies, and Engage Hyouban boasts 30 million reviews for 220,000 companies. Neither of these sites offers as much information on tech companies in Japan as OpenWork does. If you’re applying to a large company such as Rakuten, you may find some additional reviews there, but many of TokyoDev’s clients are smaller companies that aren’t listed at all.

Google Maps Reviews

An unusual but occasionally helpful place to find company reviews is on Google Maps. If you search for a business’s main corporate office location—usually in Tokyo—you will sometimes find reviews written by current or former employees. Whether these reviews are high-quality or trustworthy is another matter. Rakuten, for example, has reviews with a range of opinions. Cybozu, by contrast, mostly has reviews from those who would like to work for the company but currently don’t. Still, the reviews of its corporate office are consistently positive, so you can at least get an impression of the physical environment.

LinkedIn

“If you’re worried that a company might be a poor place to work, try contacting current or past employees via LinkedIn,” suggested Paul McMahon, founder of TokyoDev. “This probably works best if you’re late in the hiring process.” You can send a connect request saying, ‘I’ve received an offer from company X, and want to confirm what it’s really like to work there as an engineer. Mind if I ask you a couple of questions?’

Whether or not they respond, you can still glean good information from the profiles of past and current employees. Check to see if developers tend to leave the company quickly, for example, or how long the average employee goes before being promoted. You should keep in mind, though, that LinkedIn is not popular in Japan, for several good reasons. If you are applying to a primarily Japanese company, many of your future coworkers won’t be active there, which means you still may not be getting a complete picture.

TokyoDev

In 2020, TokyoDev began interviewing developers in order to provide a more complete, boots-on-the-ground picture of daily life at specific companies. Our Developer Stories feature interviews with developers at top Japanese tech companies, who share details about both their specific jobs and the general work environment. The goal is to give applicants a good sense of how a company operates on a day-to-day basis, from the perspective of those on the inside. So far, TokyoDev has interviewed developers from Mercari, PayPay, Givery, HENNGE, KOMOJU, and more.

In addition, TokyoDev’s job board is a selective one, listing only companies that we feel good about sending applicants to. In the rare event that employees later reach out with poor reviews of a business, if those reports can be confirmed, then TokyoDev will end its relationship with that company.

Conclusion

In short, the answer to the question “Is there a Japanese equivalent to Glassdoor?” is, “Not really.” However, by combining some of the alternatives—OpenWork, LinkedIn, TokyoDev, and perhaps even Google Maps—you can gather enough information to decide whether you want to work with a particular Japanese company. You could also ask fellow developers in our Discord.

Curious about working in Japan in general? See our articles on the subject, as well as moving to Japan, living in Japan, starting a business in Japan, and more.
I’ve spent so much time, had so many headaches, and encountered so much complexity from what, in my estimation, boils down to this: trying to get something to work on multiple computers.

It might be time to just go back to having one computer — a personal laptop — do everything. No more commit, push, and let the cloud build and deploy. No more making it possible to do a task on my phone and tablet too. No more striving to make it possible to do anything from anywhere. Instead, I should accept the constraint of doing specific kinds of tasks when I’m at my laptop. No laptop? Don’t do it. Save it for later. Is it really that important?

I think I’d save myself a lot of time and headache with that constraint. No more continuous over-investment of my time in making it possible to do some particular task across multiple computers. It’s a subtle, but fundamental, shift in thinking about my approach to computing tasks.

Today, my default posture is to defer control of tasks to cloud computing platforms. Let them do the work, and I can access and monitor that work from any device. Like, for example, publishing a version of my website: git commit, push, and let the cloud build and deploy it. But beware, there be possible dragons! The build fails. It’s not clear why, but it “works on my machine”. Something is different between my computer and the computer in the cloud. Now I’m troubleshooting an issue unrelated to my website itself. I’m troubleshooting an issue with the build and deployment of my website across multiple computers.

It’s easy to say: build works on my machine, deploy it! It’s deceivingly time-consuming to take that one more step and say: let another computer build it and deploy it.

So rather than taking the default posture of “cloud-first”, i.e. push to the cloud and let it handle everything, I’d rather take a “local-first” approach where I choose one primary device to do tasks on, and ensure I can do them from there. Everything else beyond that, i.e. getting it to work on multiple computers, is a “progressive enhancement” in my workflow. I can invest the time, if I want to, but I don’t have to. This stands in contrast to where I am today, which is: if a build fails in the cloud, I have to invest the time, because that’s how I’ve set up my workflow. I can only deploy via the cloud. So I have to figure out how to get the cloud’s computer to build my site, even when my laptop is doing it just fine.

It’s hard to make things work identically across multiple computers. I get it, that’s a program, not software. And that’s the work. But sometimes a program is just fine. Wisdom is knowing the difference.
For the past week, I've been working off the Minisforum UM870. A tiny mini PC with an 8-core/16-thread AMD 8745H CPU, which retails for $343 (or €379) as a bare-bones unit, and stays below $550 even after adding 48GB of RAM and 1TB of storage. I'm shocked to report that I really don't need more than this!

I mean, I knew that Apple's Mac Mini, which is just as petite as the Minisforum, had plenty of power for macOS. But somehow I thought Apple had some special sauce that made this possible, and that PCs were forever condemned to be bigger, louder, and slower. How 2020 of me.

The UM870 is a little beast. It runs our full HEY test suite in just 2m28s. In comparison, it takes a 14-core M4 Pro 2m49s, and such an Apple costs $2,199 once you've given it 48GB of RAM and 1TB of storage. Now, that M4 Mac Mini can probably do things with, say, 8K video editing that the UM870 can't touch. But on the other hand, the UM870 can play the latest video games. Everything from Fortnite to Cyberpunk 2077 to Forza Horizon. It won't trouble a modern, dedicated Nvidia card for max FPS, but it's perfectly playable at 1080p at medium settings in a ton of games.

In raw CPU power, the AMD 8745H will match a regular M4 in multi-core. They both clock in right around 13,000 points on Geekbench 6, though the M4 is a fair bit quicker in single-core. The AMD is also far behind an M4 Pro in raw multi-core power (13K vs 22K), but at less than a quarter of the cost, it's hard to complain.

But as with the example of video games, it's often deceiving just to compare the Geekbench numbers, because it all depends on what you're doing. If you're really into video games, it's no use having extra grunt if your favorite games won't run. The same is true if you're a developer working with Docker containers and a Linux toolchain. As quoted above, the UM870 handily defeats the M4 Pro in our all-cores-buzzing HEY test suite. That's partly because we run databases and accessories, like MySQL, Redis, and ElasticSearch, in Docker containers. Even though we run the Ruby code natively on both platforms, the Docker dependencies put the Mac further behind than it otherwise would have been, because Linux runs Docker natively, and the Mac has to deal with the file-system tax and other drawbacks.

The irony is that it was partly Apple's volume with, and investment in, TSMC that got us these incredible AMD chips, as they're riding the same improvements in TSMC manufacturing prowess as Apple's M chips. The Zen 4 cores in the 8745H are all forged on the same 5nm process as the M2, so it's no surprise that the AMD cores are dead on the money for Apple's in Geekbench single-core performance.

And Zen 4 is even the last generation! The insane new (and insanely named) AMD Ryzen AI Max 395+ chip that's used in the upcoming Framework Desktop runs on Zen 5 cores. And with 16 of those, the 395+ is faster in Geekbench multi-core than an M4 Pro, and only ever so slightly behind the M4 Max. On my HEY test suite, it completes the run in an insane 1m21s — more than twice as fast as the 14-core M4 Pro!

But I digress. The 395+ chip isn't cheap, even if it's still a great deal. The Framework Desktop with 64GB/1TB, which is twice as fast as the M4 Pro on our HEY tests, is $1,744. That's still less than the $2,199 Mac Mini, which only has 48GB of RAM. But obviously way more than a $550 Minisforum! And while it's quite small, it's not tiny like the UM870.

Regardless, this is what I love about technology. I love when our assumptions are tested: just how small and cheap can an awesome developer machine become? I love that open-source Linux is able to run laps around Apple in the workloads that many developers need (like working with Docker containers). I love that this tiny, silent $550 mini PC on my desk is capable of putting out computing power that only a decade ago would have been reserved for loud, honking metal in a data center.

Mini PCs have gotten really good. AMD is on a roll. Linux is a blast. These are my conclusions. Check out the Minisforum UM870 or the Beelink SER8. Anything with an AMD 7745H and up to an 8945HS should be a great deal. If you want to splurge (yet still get a bargain compared to the Macs), you could have a look at the new HX370 in the Beelink SER9 or Minisforum X1, but I'd save my money, buy a Lofree Flow84 keyboard to go with the new mini rig, and put the rest of the money towards a KEF LSX II savings fund!