Traditionally found in printed media, drop caps are created by emphasizing the size, color, weight, or style of the first letter in the first sentence of a paragraph. We can easily reproduce this effect on webpages by using the :first-letter pseudo-element.

Writing the styles #

Let's start by creating a class called drop-cap and adding a bit of style to it:

.drop-cap:first-letter {
  float: left;
  font-size: 4em;
  line-height: 1;
  margin: .125em .25em;
}

As you can see, the size of the first letter will be significantly larger than the rest of the text. A typical drop cap will line up with the top of the first line of text and the left margin of the paragraph. Horizontal alignment occurs naturally, but we need to account for vertical alignment. By default, the letter sits slightly higher than we want it to, so the top margin (.125em here) offsets it enough to get it in line. Depending on font size and unit, this number will vary. You'll also notice that the first letter is floated. This...
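For reference, using the class is just a matter of adding it to a paragraph. A minimal sketch (the drop-cap class comes from the excerpt above; the markup and text are placeholders):

<p class="drop-cap">
  Traditionally found in printed media, drop caps draw the
  reader's eye to the beginning of a passage.
</p>

Worth noting: :first-letter only applies to block containers, so the effect won't appear on an element styled display: inline.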
over a year ago


More from A Beautiful Site

Revisiting FOUCE

It's been a while since I wrote about FOUCE, and I've since come up with an improved solution that I think is worth a post. This approach is similar to hiding the page content and then fading it in, but I've noticed it's far less distracting without the fade. It also adds a two-second timeout to prevent network issues or latency from rendering an "empty" page.

First, we'll add a class called reduce-fouce to the <html> element.

<html class="reduce-fouce">
  ...
</html>

Then we'll add this rule to the CSS.

<style>
  html.reduce-fouce {
    opacity: 0;
  }
</style>

Finally, we'll wait until all the custom elements have loaded or two seconds have elapsed, whichever comes first, and we'll remove the class, causing the content to show immediately.

<script type="module">
  await Promise.race([
    // Load all custom elements
    Promise.allSettled([
      customElements.whenDefined('my-button'),
      customElements.whenDefined('my-card'),
      customElements.whenDefined('my-rating')
      // ...
    ]),

    // Resolve after two seconds
    new Promise(resolve => setTimeout(resolve, 2000))
  ]);

  // Remove the class, showing the page content
  document.documentElement.classList.remove('reduce-fouce');
</script>

This approach seems to work especially well and won't end up "stranding" the user if network issues occur.
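As an aside, if maintaining that hard-coded list of tag names feels fragile, the same idea can be written to discover undefined custom elements at runtime. A minimal sketch, assuming the same reduce-fouce class and two-second timeout as above (the :not(:defined) discovery step is my own variation, not part of the original post):

<script type="module">
  // Collect the tag names of all custom elements that aren't defined yet
  const tags = [...new Set(
    [...document.querySelectorAll(':not(:defined)')].map(el => el.localName)
  )];

  await Promise.race([
    // Wait for every discovered element to be defined…
    Promise.allSettled(tags.map(tag => customElements.whenDefined(tag))),
    // …or give up after two seconds
    new Promise(resolve => setTimeout(resolve, 2000))
  ]);

  document.documentElement.classList.remove('reduce-fouce');
</script>

The trade-off is that this only finds elements present in the initial DOM; anything rendered later would still need the explicit list.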

8 months ago 93 votes
If Edgar Allan Poe was into Design Systems

Once upon a midnight dreary, while I pondered, weak and weary, While I nodded, nearly napping, suddenly there came a tapping, "'Tis a design system," I muttered, "bringing order to the core— Ah, distinctly I remember, every button, every splendor, Each component, standardized, like a raven's watchful eyes, Unified in system's might, like patterns we restore— And each separate style injection, linked with careful introspection, 'Tis a design system, nothing more.

9 months ago 99 votes
Web Components Are Not the Future — They’re the Present

It’s disappointing that some of the most outspoken individuals against Web Components are framework maintainers. These individuals are, after all, in some of the best positions to provide valuable feedback. They have a lot of great ideas! Alas, there’s little incentive for them because standards evolve independently and don’t necessarily align with framework opinions. How could they? Opinions are one of the things that make frameworks unique. And therein lies the problem. If you’re convinced that your way is the best and only way, it’s natural to feel disenchanted when a decision is made that you don’t fully agree with.

This is my open response to Ryan Carniato’s post from yesterday called “Web Components Are Not the Future.”

WTF is a component anyway? #

The word component is a loaded term, but I like to think of it in relation to interoperability. If I write a component in Framework A, I would like to be able to use it in Framework B, C, and D without having to rewrite it or include its entire framework. I don’t think many will disagree with that objective. We’re not there yet, but the road has been paved and, instead of learning to drive on it, frameworks are building…different roads.

Ryan states:

If the sheer number of JavaScript frameworks is any indicator we are nowhere near reaching a consensus on how one should author components on the web. And even if we were a bit closer today we were nowhere near there a decade ago.

The thing is, we don’t need to agree on how to write components; we just need to agree on the underlying implementation. Then you can use classes, hooks, or whatever flavor you want to create them. Turns out, we have a very well-known, ubiquitous technology that we’ve chosen to do this with: HTML. Ryan continues:

But it also can have a negative effect. If too many assumptions are made it becomes harder to explore alternative space because everything gravitates around the establishment.

What is more established than a web standard that can never change? If the concern is premature standardization, well, it’s a bit late for that. So let’s figure out how to get from where we are now to where we want to be. The solution isn’t to start over at the specification level, it’s to rethink how front end frameworks engage with current and emerging standards and work to improve them. Respectfully, it’s time to stop complaining, move on, and fix the things folks perceive as suboptimal.

The definition of component #

That said, we also need to realize that Web Components aren’t a 1:1 replacement for framework components. They’re tangentially related things, and I think a lot of confusion stems from this. Ryan says:

We should really fix the definition of component. So the fundamental problem with Web Components is that they are built on Custom Elements. Elements !== Components. More specifically, Elements are a subset of Components.

One could argue that every Element could be a Component, but not all Components are Elements. To be fair, I’ve never really liked the term “Web Components” because it competes with the concept of framework components, but that’s what caught on and that's what most people are familiar with these days. Alas, there is a very important distinction here. Sure, a button and a text field can be components, but there are other types. For example, many frameworks support a concept of renderless components that exist in your code, but not in the final HTML. You can’t do that with Web Components, because every custom element results in an actual DOM element.
(FWIW I don’t think this is a bad thing — but I digress…)

As to why Web Components don’t do all the things framework components do, that’s because they’re a lower-level implementation of an interoperable element. They’re not trying to do everything framework components do. That’s what frameworks are for.

It’s ok to be shiny #

In fact, this is where frameworks excel. They let you go above and beyond what the platform can do on its own. I fully support this trial-and-error way of doing things. After all, it’s fun to explore new ideas and live on the bleeding edge. We got a lot of cool stuff from doing that. We got document.querySelector() from jQuery. CSS Custom Properties were inspired by Sass. Tagged template literals were inspired by JSX. Soon we’re getting signals from Preact.

And from all the component-based frameworks that came before them, we got Web Components: custom HTML elements that can be authored in many different ways (because we know people like choices) and are fully interoperable (if frameworks and metaframeworks would continue to move towards the standard instead of protecting their own).

Frameworks are a testbed for new ideas that may or may not work out. We all need to be OK with that. Even framework authors. Especially framework authors. More importantly, we all need to stop being salty when our way isn’t what makes it into the browser. There will always be a better way to do something, but none of us have the foresight to know what a perfect solution looks like right now. Hindsight is 20/20.

As humans, we’re constantly striving to make things better. We’re really good at it, by the way. But we must have the discipline to reach various checkpoints to pause, reflect, and gather feedback before continuing. Even the cheapest cars on the road today will outperform the Model T in every way. I’m sure Ford could have made the original Model T way better if they had spent another decade working on it, but do you know what made the next version even better than another ten years of tinkering would have? The feedback they got from actual users who bought them, sat in them, and drove them around on actual roads.

Web Standards offer a promise of stability, and we need to move forward to improve them together. Using one’s influence to rally users against the very platform you’ve built your success on is damaging to both the platform and the community. We need these incredible minds to be less divisive and more collaborative.

The right direction #

Imagine if we applied the same arguments against HTML early on. What if we never standardized it at all? Would the Web be a better place if every site required a specific browser? (Narrator: it wasn't.) Would it be better if every site was Flash or a Java applet? (Remember Silverlight? lol) Sure, there are often better alternatives for every use case, but we have to pick something that works for the majority, then we can iterate on it.

Web Components are a huge step in the direction of standardization, and we should all be excited about that. But the Web Component implementation isn’t compatible with existing frameworks, and therein lies an existential problem. Web Components are a threat to the peaceful, proprietary way of life for frameworks that have amassed millions of users — the majority of web developers. Because opinions vary so wildly, when a new standard emerges frameworks often can’t adapt to it without breaking changes. And breaking changes can be detrimental to a user base. Have you spotted the issue?
You can’t possibly champion Web Standards when you’ve built a non-standard thing that will break if you align with the emerging standard. It’s easier to oppose the threat than to adapt to it.

And of course Web Components don’t do everything a framework does. How can the platform possibly add all the features every framework added last week? That would be absolutely reckless. And no, the platform doesn’t move as fast as your framework, and that’s sometimes painful. But it’s by design. This process is what gives us APIs that continue to work for decades.

As users, we need to get over this hurdle and start thinking about how frameworks can adapt to current standards and how to evolve them as new ones emerge. Let’s identify shortcomings in the spec and work together to improve the ecosystem instead of arguing about whose shit smells worse. Reinventing the wheel isn’t the answer. Lock-in isn’t the answer.

This is why I believe the next generation of frameworks will converge on custom elements as an interoperable component model, enhance that model by sprinkling in awesome features of their own, and focus more on flavors (class-based, functional, signals, etc.) and higher-level functionality. As for today's frameworks? How they adapt will determine how relevant they remain.

Living dangerously #

Ryan concludes:

So in a sense there are nothing wrong with Web Components as they are only able to be what they are. It's the promise that they are something that they aren't which is so dangerous. The way their existence warps everything around them that puts the whole web at risk. It's a price everyone has to pay.

So Web Components aren’t the specific vision you had for components. That's fine. But that's how it is. They're not Solid components. They’re not React components. They’re not Svelte components. They’re not Vue components. They’re standards-based Web Components that work in all of the above. And they could work even better in all of the above if all of the above were interested in advancing the platform instead of locking users in.

I’m not a conspiracy theorist, but I find it interesting how many of the loudest critics are, or have been, sponsored and/or hired by for-profit companies whose platforms rely heavily on said frameworks. Do you think it’s in their best interest to follow Web Standards if that means making their service less relevant and less lucrative? Of course not. If you’ve built an empire on top of something, there’s absolutely zero incentive to tear it down for the betterment of humanity. That’s not how capitalism works. It’s far more profitable to lock users in and keep them paying. But you know what…? Web Standards don't give a fuck about monetization.

Longevity supersedes ingenuity #

The last thing I’d like to talk about is this line here.

Web Components possibly pose the biggest risk to the future of the web that I can see.

Of course, this is from the perspective of a framework author, not from the people actually shipping and maintaining software built using these frameworks. And the people actually shipping software are the majority, but that’s not prestigious so they rarely get the high follower counts. The people actually shipping software are tired of framework churn. They're tired of shit they wrote last month being outdated already. They want stability. They want to know that the stuff they build today will work tomorrow. As history has proven, no framework can promise that. You know what framework I want to use?
I want a framework that aligns with the platform, not one that replaces it. I want a framework that values incremental innovation over user lock-in. I want a framework that says it's OK to break things if it means making the Web a better place for everyone. Yes, that comes at a cost, but almost every good investment does, and I would argue that cost will be less expensive than learning a new framework and rebuilding buttons for the umpteenth time.

The Web platform may not be perfect, but it continuously gets better. I don’t think frameworks are bad, but, as a community, we need to recognize that a fundamental piece of the platform has changed, and it's time to embrace the interoperable component model that Web Component APIs have given us…even if that means breaking things to get there.

The component war is over.

11 months ago 100 votes
Component Machines

Components are like little machines. You build them once. Use them whenever you need them. Every now and then you open them up to oil them or replace a part, then you send them back to work. And work, they do. Little component machines just chugging along so you never have to write them from scratch ever again. Adapted from this tweet.

a year ago 89 votes
Styling Custom Elements Without Reflecting Attributes

I've been struggling with the idea of reflecting attributes in custom elements and when it's appropriate. I think I've identified a gap in the platform, but I'm not sure exactly how we should fill it. I'll explain with an example.

Let's say I want to make a simple badge component with primary, secondary, and tertiary variants.

<my-badge variant="primary">foo</my-badge>
<my-badge variant="secondary">bar</my-badge>
<my-badge variant="tertiary">baz</my-badge>

This is a simple component, but one that demonstrates the problem well. I want to style the badge based on the variant property, but sprouting attributes (which occurs as a result of reflecting a property back to an attribute) is largely considered a bad practice. A lot of web component libraries do it out of necessity to facilitate styling — including Shoelace — but is there a better way?

The problem #

I need to style the badge without relying on reflected attributes. This means I can't use :host([variant="..."]) because the attribute may or may not be set by the user. For example, if the component is rendered in a framework that sets properties instead of attributes, or if the property is set or changed programmatically, the attribute will be out of sync and my styles will be broken.

So how can I style the badge based on its variant without reflection? Let's assume we have the following internals, which is all we really need for the badge.

<my-badge>
  #shadowRoot
    <slot></slot>
</my-badge>

What can we do about it? #

I can't add classes to the slot, because :host(:has(.slot-class)) won't match.

I can't set a data attribute on the host element, because that's the same as reflection and might cause issues with SSR and DOM morphing libraries.

I could add a wrapper element around the slot and apply classes to it, but I'd prefer not to bloat the internals with additional elements. With a wrapper, users would have to use ::part(wrapper) to target it. Without the wrapper, they can set background, border, and other CSS properties directly on the host element, which is more desirable.

I could add custom states for each variant, but this gets messy for non-Boolean values and feels like an abuse of the API.

Filling the gap #

I'm not sure what the best solution is or could be, but one thing that comes to mind is a way to provide some kind of cross-root version of :has that works with :host. Something akin to:

:host(:has-in-shadow-root(.some-selector)) {
  /* maybe one day… */
}

If you have any thoughts on this one, hit me up on Twitter.
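For what it's worth, here's roughly what the custom states option (the one the post sets aside as messy for non-Boolean values) might look like. A minimal sketch, assuming CustomStateSet and :state() support; my-badge and the variant names come from the post, everything else is my own guess:

<style>
  /* Match on custom states instead of a reflected attribute */
  my-badge:state(primary) { background: royalblue; color: white; }
  my-badge:state(secondary) { background: slategray; color: white; }
  my-badge:state(tertiary) { background: goldenrod; }
</style>

<script>
  class MyBadge extends HTMLElement {
    static observedAttributes = ['variant'];
    #internals = this.attachInternals();
    #variant = 'primary';

    connectedCallback() {
      // Same internals as the post: just a slot
      if (!this.shadowRoot) {
        this.attachShadow({ mode: 'open' }).innerHTML = '<slot></slot>';
      }
      this.#internals.states.add(this.#variant);
    }

    attributeChangedCallback(name, oldValue, newValue) {
      // Attribute → property, one way only; nothing is ever reflected back
      if (newValue !== null) this.variant = newValue;
    }

    get variant() {
      return this.#variant;
    }

    set variant(value) {
      // Swap the old state for the new one
      this.#internals.states.delete(this.#variant);
      this.#variant = value;
      this.#internals.states.add(value);
    }
  }

  customElements.define('my-badge', MyBadge);
</script>

This keeps the host element free of sprouted attributes, but it also shows the messiness the post alludes to: every allowed value needs its own state, and the bookkeeping has to guarantee only one variant state is set at a time.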

a year ago 87 votes

More in programming

If Apple cared about privacy

Defaults matter

20 hours ago 5 votes
first-class merges and cover letters

Although it looks really good, I have not yet tried the Jujutsu (jj) version control system, mainly because it’s not yet clearly superior to Magit. But I have been following jj discussions with great interest.

One of the things that jj has not yet tackled is how to do better than git refs / branches / tags. As I understand it, jj currently has something like Mercurial bookmarks, which are more like raw git ref plumbing than a high-level porcelain feature. In particular, jj lacks signed or annotated tags, and it doesn’t have branch names that always automatically refer to the tip. This is clearly a temporary state of affairs because jj is still incomplete and under development, and these gaps are going to be filled. But the discussions have led me to think about how git’s branches are unsatisfactory, and what could be done to improve them.

branch · merge · rebase · squash · fork · cover letters · previous branch · workflow · questions

branch

One of the huge improvements in git compared to Subversion was git’s support for merges. Subversion proudly advertised its support for lightweight branches, but a branch is not very useful if you can’t merge it: an un-mergeable branch is not a tool you can use to help with work-in-progress development. The point of this anecdote is to illustrate that rather than trying to make branches better, we should try to make merges better, and branches will get better as a consequence.

Let’s consider a few common workflows and how git makes them all unsatisfactory in various ways. Skip to cover letters and previous branch below, where I eventually get to the point.

merge

A basic merge workflow is:

create a feature branch
hack, hack, review, hack, approve
merge back to the trunk

The main problem is when it comes to the merge, there may be conflicts due to concurrent work on the trunk. Git encourages you to resolve conflicts while creating the merge commit, which tends to bypass the normal review process. Git also gives you an ugly, useless canned commit message for merges that hides what you did to resolve the conflicts.

If the feature branch is a linear record of the work then it can be cluttered with commits to address comments from reviewers and to fix mistakes. Some people like an accurate record of the history, but others prefer the repository to contain clean logical changes that will make sense in years to come, keeping the clutter in the code review system.

rebase

A rebase-oriented workflow deals with the problems of the merge workflow but introduces new problems. Primarily, rebasing is intended to produce a tidy, logical commit history. And when a feature branch is rebased onto the trunk before it is merged, a simple fast-forward check makes it trivial to verify that the merge will be clean (whether it uses a separate merge commit or directly fast-forwards the trunk).

However, it’s hard to compare the state of the feature branch before and after the rebase. The current and previous tips of the branch (amongst other clutter) are recorded in the reflog of the person who did the rebase, but they can’t share their reflog. A force-push erases the previous branch from the server. Git forges sometimes make it possible to compare a branch before and after a rebase, but it’s usually very inconvenient, which makes it hard to see if review comments have been addressed. And a reviewer can’t fetch past versions of the branch from the server to review them locally.

You can mitigate these problems by adding commits in --autosquash format and delaying the rebase until just before merge, as in the sketch below.
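A quick illustration of that mitigation, for anyone who hasn’t used the autosquash machinery (the commit hashes and the trunk branch name are placeholders):

git commit --fixup=a1b2c3d           # record a review fix targeting commit a1b2c3d
git commit --fixup=e4f5a6b           # more fixups as review continues
git rebase -i --autosquash trunk     # fold each fixup into its target, just before merge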
However, that reintroduces the problem of merge conflicts: if the autosquash doesn’t apply cleanly, the branch should have another round of review to make sure the conflicts were resolved OK.

squash

When the trunk consists of a sequence of merge commits, the --first-parent log is very uninformative. A common way to make the history of the trunk more informative, and deal with the problems of cluttered feature branches and poor rebase support, is to squash the feature branch into a single commit on the trunk instead of merging. This encourages merge requests to be roughly the size of one commit, which is arguably a good thing. However, it can be uncomfortably confining for larger features, or cause extra busy-work co-ordinating changes across multiple merge requests. And squashed feature branches have the same merge conflict problem as rebase --autosquash.

fork

Feature branches can’t always be short-lived. In the past I have maintained local hacks that were used in production but were not (not yet?) suitable to submit upstream. I have tried keeping a stack of these local patches on a git branch that gets rebased onto each upstream release. With this setup, the problem of reviewing successive versions of a merge request becomes the bigger problem of keeping track of how the stack of patches evolved over longer periods of time.

cover letters

Cover letters are common in the email patch workflow that predates git, and they are supported by git format-patch. GitHub and other forges have a webby version of the cover letter: the message that starts off a pull request or merge request.

In git, cover letters are second-class citizens: they aren’t stored in the repository. But many of the problems I outlined above have neat solutions if cover letters become first-class citizens, with a Jujutsu twist.

A first-class cover letter starts off as a prototype for a merge request, and becomes the eventual merge commit. Instead of unhelpful auto-generated merge commits, you get helpful and informative messages. No extra work is needed since we’re already writing cover letters. Good merge commit messages make good --first-parent logs.

The cover letter subject line works as a branch name. No more need to invent filename-compatible branch names! Jujutsu doesn’t make you name branches, giving them random names instead. It shows the subject line of the topmost commit as a reminder of what the branch is for. If there’s an explicit cover letter, the subject line will be a better summary of the branch as a whole. I often find the last commit on a branch is some post-feature cleanup, and that kind of commit has a subject line that is never a good summary of its feature branch.

As a prototype for the merge commit, the cover letter can contain the resolution of all the merge conflicts in a way that can be shared and reviewed. In Jujutsu, where conflicts are first class, the cover letter commit can contain unresolved conflicts: you don’t have to clean them up when creating the merge, you can leave that job until later. If you can share a prototype of your merge commit, then it becomes possible for your collaborators to review any merge conflicts and how you resolved them.

To distinguish a cover letter from a merge commit object, a cover letter object has a “target” header, which is a special kind of parent header. A cover letter also has a normal parent commit header that refers to earlier commits in the feature branch. The target is what will become the first parent of the eventual merge commit.
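To make the shape of that concrete, here is a hypothetical cover-letter object sketched in git’s commit-object format. The target header is the article’s proposal; the hashes, author, and message are invented placeholders. Here parent points at the tip of the feature branch, while target points at the trunk commit that would become the merge’s first parent:

tree 9fce83a...
parent 41ab127...
target 7d03aa9...
author Alice Example <alice@example.com> 1735689600 +0000
committer Alice Example <alice@example.com> 1735689600 +0000

Add rate limiting to the API gateway

The cover letter body: a summary of the whole branch, notes for
reviewers, and any shared conflict resolutions. On merge, this
message survives as the merge commit message.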
previous branch

The other ingredient is to add a “previous branch” header, another special kind of parent commit header. The previous branch header refers to an older version of the cover letter and, transitively, an older version of the whole feature branch. Typically the previous branch header will match the last shared version of the branch, i.e. the commit hash of the server’s copy of the feature branch.

The previous branch header isn’t changed during normal work on the feature branch. As the branch is revised and rebased, the commit hash of the cover letter will change fairly frequently. These changes are recorded in git’s reflog or jj’s oplog, but not in the “previous branch” chain.

You can use the previous branch chain to examine diffs between versions of the feature branch as a whole. If commits have Gerrit-style or jj-style change-IDs then it’s fairly easy to find and compare previous versions of an individual commit. The previous branch header supports interdiff code review, or allows you to retain past iterations of a patch series.

workflow

Here are some sketchy notes on how these features might work in practice.

One way to use cover letters is jj-style, where it’s convenient to edit commits that aren’t at the tip of a branch, and easy to reshuffle commits so that a branch has a deliberate narrative.

When you create a new feature branch, it starts off as an empty cover letter with both target and parent pointing at the same commit. Alternatively, you might start a branch ad hoc, and later cap it with a cover letter.

If this is a small change and rebase + fast-forward is allowed, you can edit the “cover letter” to contain the whole change. Otherwise, you can hack on the branch any which way. Shuffle the commits that should be part of the merge request so that they occur before the cover letter, and edit the cover letter to summarize the preceding commits.

When you first push the branch, there’s (still) no need to give it a name: the server can see that this is (probably) going to be a new merge request because the top commit has a target branch and its change-ID doesn’t match an existing merge request. Also when you push, your client automatically creates a new instance of your cover letter, adding a “previous branch” header to indicate that the old version was shared. The commits on the branch that were pushed are now immutable; rebases and edits affect the new version of the branch.

During review there will typically be multiple iterations of the branch to address feedback. The chain of previous branch headers allows reviewers to see how commits were changed to address feedback, interdiff style. The branch can be merged when the target header matches the current trunk and there are no conflicts left to resolve.

When the time comes to merge the branch, there are several options:

For a merge workflow, the cover letter is used to make a new commit on the trunk, changing the target header into the first parent commit, and dropping the previous branch header. Or, if you like to preserve more history, the previous branch chain can be retained.
Or you can drop the cover letter and fast forward the branch on to the trunk.
Or you can squash the branch on to the trunk, using the cover letter as the commit message.

questions

This is a fairly rough idea: I’m sure that some of the details won’t work in practice without a lot of careful work on compatibility and deployability.

Do the new commit headers (“target” and “previous branch”) need to be headers?
What are the compatibility issues with adding new headers that refer to other commits?

How would a server handle a push of an unnamed branch? How could someone else pull a copy of it?

How feasible is it to use cover letter subject lines instead of branch names?

The previous branch header is doing a similar job to a remote tracking branch. Is there an opportunity to simplify how we keep a local cache of the server state?

Despite all that, I think something along these lines could make branches / reviews / reworks / merges less awkward. How you merge should be a matter of your project’s preferred style, without interference from technical limitations that force you to trade off one annoyance against another.

There remains a non-technical limitation: I have assumed that contributors are comfortable enough with version control to use a history-editing workflow effectively. I’ve lost all perspective on how hard this is for a newbie to learn; I expect (or hope?) jj makes it much easier than git rebase.

7 hours ago 3 votes
Many Hard Leetcode Problems are Easy Constraint Problems

In my first interview out of college I was asked the change counter problem: given a set of coin denominations, find the minimum number of coins required to make change for a given number. IE for USA coinage and 37 cents, the minimum number is four (quarter, dime, 2 pennies).

I implemented the simple greedy algorithm and immediately fell into the trap of the question: the greedy algorithm only works for "well-behaved" denominations. If the coin values were [10, 9, 1], then making 37 cents would take 10 coins in the greedy algorithm but only 4 coins optimally (10+9+9+9). The "smart" answer is to use a dynamic programming algorithm, which I didn't know how to do. So I failed the interview.

But you only need dynamic programming if you're writing your own algorithm. It's really easy if you throw it into a constraint solver like MiniZinc and call it a day.

int: total;
array[int] of int: values = [10, 9, 1];
array[index_set(values)] of var 0..: coins;

constraint sum (c in index_set(coins)) (coins[c] * values[c]) == total;

solve minimize sum(coins);

You can try this online here. It'll give you a prompt to put in total and then give you successively-better solutions:

coins = [0, 0, 37];
----------
coins = [0, 1, 28];
----------
coins = [0, 2, 19];
----------
coins = [0, 3, 10];
----------
coins = [0, 4, 1];
----------
coins = [1, 3, 0];
----------

Lots of similar interview questions are this kind of mathematical optimization problem, where we have to find the maximum or minimum of a function subject to constraints. They're hard in programming languages because programming languages are too low-level. They are also exactly the problems that constraint solvers were designed to solve. Hard leetcode problems are easy constraint problems.1 Here I'm using MiniZinc, but you could just as easily use Z3 or OR-Tools or whatever your favorite generalized solver is.

More examples

This was a question in a different interview (which I thankfully passed): given a list of stock prices through the day, find the maximum profit you can get by buying one stock and selling one stock later. It's easy to do in O(n^2) time, or if you are clever, you can do it in O(n). Or you could be not clever at all and just write it as a constraint problem:

array[int] of int: prices = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8];

var int: buy;
var int: sell;
var int: profit = prices[sell] - prices[buy];

constraint sell > buy;
constraint profit > 0;

solve maximize profit;

Reminder, link to trying it online here.

While working at that job, one interview question we tested out was: given a list, determine if three numbers in that list can be added or subtracted to give 0. This is a satisfaction problem, not an optimization problem: we don't need the "best" answer, any answer will do. We eventually decided against it for being too tricky for the engineers we were targeting. But it's not tricky in a solver:

include "globals.mzn";

array[int] of int: numbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8];
array[index_set(numbers)] of var {0, -1, 1}: choices;

constraint sum(n in index_set(numbers)) (numbers[n] * choices[n]) = 0;
constraint count(choices, -1) + count(choices, 1) = 3;

solve satisfy;

Okay, one last one, a problem I saw last year at Chipy AlgoSIG. Basically they pick some leetcode problems and we all do them. I failed to solve this one:

Given an array of integers heights representing the histogram's bar height where the width of each bar is 1, return the area of the largest rectangle in the histogram.

The "proper" solution is a tricky thing involving tracking lots of bookkeeping state, which you can completely bypass by expressing it as constraints:

array[int] of int: numbers = [2,1,5,6,2,3];

var 1..length(numbers): x;
var 1..length(numbers): dx;
var 1..: y;

constraint x + dx <= length(numbers);
constraint forall (i in x..(x+dx)) (y <= numbers[i]);

var int: area = (dx+1)*y;

solve maximize area;

output ["(\(x)->\(x+dx))*\(y) = \(area)"]

There's even a way to automatically visualize the solution (using vis_geost_2d), but I didn't feel like figuring it out in time for the newsletter.

Is this better?

Now if I actually brought these questions to an interview, the interviewee could ruin my day by asking "what's the runtime complexity?" Constraint solver runtimes are unpredictable and almost always slower than an ideal bespoke algorithm, because solvers are more expressive, in what I refer to as the capability/tractability tradeoff. But even so, they'll do way better than a bad bespoke algorithm, and I'm not experienced enough in handwriting algorithms to consistently beat a solver.

The real advantage of solvers, though, is how well they handle new constraints. Take the stock picking problem above. I can write an O(n²) algorithm in a few minutes and the O(n) algorithm if you give me some time to think. Now change the problem to:

Maximize the profit by buying and selling up to max_sales stocks, but you can only buy or sell one stock at a given time and you can only hold up to max_hold stocks at a time.

That's a way harder problem to write even an inefficient algorithm for! While the constraint problem is only a tiny bit more complicated:

include "globals.mzn";

int: max_sales = 3;
int: max_hold = 2;
array[int] of int: prices = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8];

array [1..max_sales] of var int: buy;
array [1..max_sales] of var int: sell;
array [index_set(prices)] of var 0..max_hold: stocks_held;

var int: profit = sum(s in 1..max_sales) (prices[sell[s]] - prices[buy[s]]);

constraint forall (s in 1..max_sales) (sell[s] > buy[s]);
constraint profit > 0;

constraint forall(i in index_set(prices))
  (stocks_held[i] = (count(s in 1..max_sales) (buy[s] <= i)
                   - count(s in 1..max_sales) (sell[s] <= i)));

constraint alldifferent(buy ++ sell);

solve maximize profit;

output ["buy at \(buy)\n", "sell at \(sell)\n", "for \(profit)"];

Most constraint solving examples online are puzzles, like Sudoku or "SEND + MORE = MONEY". Solving leetcode problems would be a more interesting demonstration. And you get more interesting opportunities to teach optimizations, like symmetry breaking.

Because my dad will email me if I don't explain this: "leetcode" is slang for "tricky algorithmic interview questions that have little-to-no relevance in the actual job you're interviewing for." It's from leetcode.com. ↩

19 hours ago 3 votes
ARM is great, ARM is terrible (and so is RISC-V)

I’ve long been interested in new and different platforms. I ran Debian on an Alpha back in the late 1990s and was part of the Alpha port team; then I helped bootstrap Debian on amd64. I’ve got somewhere around 8 Raspberry Pi devices in active use right now, and the free NNCPNET Internet email service …

19 hours ago 2 votes
Stumbling upon

Something like a channel changer, for the web. That's what the idea was at first. But it led to a whole new path of discovery that even the site's creators couldn't have predicted.

2 days ago 9 votes