I had a cabinet built, and the guy doing the work pointed out that the human eye is really good at detecting line-to-line deviation, but building to and correcting for that deviation requires working across the whole surface. He was making an area-effort to linear-quality argument: every time you halved the gap, you quadrupled the effort. Also, he said that was what sawdust and glue were for.
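Read as a scaling law (my own gloss, not the cabinetmaker's words): if effort tracks the area you have to true up while the eye judges the linear gap, then effort grows like 1/gap^2, so each halving of the gap quadruples the work. A tiny Python sketch:

    # My gloss on the cabinetmaker's rule: assume effort ~ area ~ 1/gap^2.
    for halvings in range(4):
        gap = 1.0 / 2 ** halvings
        effort = 1.0 / gap ** 2
        print(f"gap {gap:.3f} -> relative effort {effort:.0f}")
    # gap 1.000 -> relative effort 1
    # gap 0.500 -> relative effort 4
    # gap 0.250 -> relative effort 16
    # gap 0.125 -> relative effort 64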
One thing I’ve learned over decades of home/DIY projects is it’s usually better to intentionally target a small overlap/reveal rather than trying to have two materials match perfectly.
If you have a piece of door trim that exactly matches the piece behind, any imperfections or subsequent wood movement will be unsightly. If you instead target a 1/4” reveal, imperfections and slight wood movement are wildly less noticeable.
This is “build so the 1/32” imperfection/movement doesn’t matter at all” rather than trying to halve or quarter it. (If you can make something monolithic after attaching, such as a plaster wall, you don't have to do this, but wood furniture and trim often has these intentional offsets.)
Bands and artists do sometimes record a banger very quickly, and no, the effort is not in muscle memory or practice.
If it were in muscle memory it would be a repeatable feat, and it really isn't.
Some work is technically polished and you can see/hear the effort that went into it.
But there's a point where extra effort makes the work less good. Music starts to sound stale and overproduced, art loses some of its directness, writing feels self-conscious.
Whatever the correlation between perceived artistic merit and effort, it's a lot more complex than this article suggests.
As my old art teacher used to say, "You work on something and it gets better and better and then it turns to shit."
This is a large part of the discussions in the first one or two interviews in Interviews with Francis Bacon by David Sylvester. Bacon talks about pushing to the limits of adding more to a work until it's good, and then if taken too far it ruins the work. And only very rarely can he pull it back around to good.
Was your old teacher’s name Cory Doctorow?
I don't think the phenomenon you're describing is a result of the amount of effort put into the piece.
Also: the speed and quality improvements seen when having to redo homework lost to an undiscerning canine companion are a corollary of this. Perhaps the time it takes to 'redo' is a better measure than the last mile: it's the entire effort, minus the initial solution-space exploration?
How often is a drawing really trashed and restarted?
There's the saying, "Plan to throw one away," but seems like it varies in practice (for software).
There are even books about patching paintings, like Master Disaster: Five Ways to Rescue Desperate Watercolors.
In architecture, it's understood that the people, vehicles, and landscape are not as exact as the building or structure, and books encourage reusing magazine clippings, overhead projectors, and copy machines to generally "be quick" in execution.
Would like to see thoughts on comparing the current process with the "Draw 50" series, where most of the skeleton is on paper by the first step, but the last step is really the super-detailed, totally refined owl.
From my very limited experience with art, it's more often the case that a work in progress is abandoned and then attempted anew later, rather than trashed and restarted. Or it is iterated on to a degree that it is not meaningfully different from a full restart.
I have a bit more experience with software, and the only reason we don't plan to throw one away is that it costs more money and the market pressure on software quality is too low to make stakeholders care. In my personal hobby coding, I often practice this (or do what I described above with art, which is closer to abandoning it until inspiration strikes again, at which point a blank slate is more inviting). The closest I get professionally is a "spike", where I explore something via code with the output being not the code itself but the knowledge attained, which then becomes an input to new code writing.
While I'm always ready to throw away code when I realize that there is a better way to do things, I've found it quite difficult to write code with the intent to throw it away. However, I often do write code with the intent of modifying it once I have a better idea of what is needed. It might be because I'm comparatively better at refactoring than at starting from scratch.
So I can only speak from my own experience of the last 5 years of trying (and often failing!) to accurately copy or otherwise create various drawings.
Very rarely do I start completely from scratch, but I usually adjust the drawing so much that maybe I should have. I wonder, if I tracked the adjustments, whether I would find that every line was redrawn in some cases.
Thing is, it is hard to see what part is 'off' until most of the other parts are right. Especially with highly symmetric drawings, where symmetries appear gradually as the whole thing comes together.
Hmm... a lot? For a complex work, you'll sometimes do some number of sketches and studies and drawings and underpaintings... Lots of things get tried/discarded/modified before you land on a final painting.
When programming stuff as a hobby, I do always plan to throw one away.
The first one is where I learn my lessons and write enough spaghetti until I fully understand the problem.
Then I delete the first one, and start over with the lessons learnt.
When I was a kid, I obsessed over getting a picture right the first time. But as I got older, I learned to do subject studies to refine tricky details before committing them to a larger piece.
If you ever get the chance to see the personal effects of a famous artist, most have piles and piles of sketches and studies they've done while prepping for a larger piece.
The more mileage you get, the easier it is to see the mistakes in your old art (if you’re improving lol)
The more refined your technique is, the harder it will be to discern mistakes and aesthetic failures.
Eventually you might come to a point where you can’t improve because you literally don’t see any issues. That might be the high water mark of your ability.
Judging by the comments here, I'm the only one, but I have no idea what he's talking about. Even the abstract:
> The act of creation is fractal exploration–exploitation under optimal feedback control. When resolution increases the portion of parameter space that doesn't make the artifact worse (acceptance volume) collapses. Verification latency and rate–distortion combine into a precision tax that scales superlinearly with perceived quality.
Is this just saying that it's ok if doodles aren't good, but the closer you get to the finished work, the better it has to be? If your audience can't understand what the hell you're talking about for simple ideas, you've gone too far.
This kind of article is why people read comments first today.
It's such a simple idea. And it already has a name: diminishing returns. I don't know what prompted this article, but it wasn't insight.
"The last 10% take 90% of the time"
The author had a shower thought. It was poorly explored, poorly argued and deliberately packaged in complex language to hide the lack of substance. The bibtex reference at the end is the cherry on top.
Hey, I have my share of poorly-explored shower-thought posts, but at least I don't try to ornament them in sesquipedalian locution that purposely obfuscates the rudimentariness of the notion.
Hate to comment on the medium or writing style instead of the content, but you're not alone. I understand the terms in the article in isolation or as used in other fields, but it seems like the author is using a lot of technical metaphors. Or maybe I'm not their sophisticated audience.
The abstract is some of the worst writing I've read in a while. Trying to sound so very smart while being incapable of getting your point across. This whole article reeks of pretentiousness.
It would be clearer with a comma after "increases". Without that it's a garden-path sentence:
https://en.wikipedia.org/wiki/Garden-path_sentence
You read "increases" as a transitive verb, and then reach the "collapses" at the end of the sentence and have to re-parse the whole thing when you realize it was really intransitive.
Yeah, to date I think the smartest writing/speaking I've seen was Feynman. The way he could explain complicated physics concepts in simple words is just unmatched.
Feynman's Physics lectures are proof of that: https://www.feynmanlectures.caltech.edu/
10/10 should be required reading for all humans
Yeah, it came off as complete nonsense. If someone were talking to me like this in person, I'd probably start suspecting they were doing it to distract me while their friend was outside stealing my hubcaps.
I had to think way too hard about what the author was trying to say. It smacks of an attempt at precise language, yet the subject matter is not precise at all. The author commits a Paul Grahamism, assuming their personal experience is generalizable and uniform.
Certainly, some artists work in the way they describe. Maybe even "most", who knows. But there are plenty of artists that do not. I've known plenty of artists to go straight to the detail in one corner of their piece and work linearly all the way across and down the canvas. I don't know how they do it, it certainly doesn't work for me, but obviously different people work in different ways.
In defense of Paul Graham: his essays are often unnecessarily long, but I don't remember him writing anything as bad as this abstract.
It's more on par with something you'll find on lesswrong.
Just map quality q to e^q or something and it will be sublinear again.
Or more directly, if your argument for why effort scales linearly with perceived quality doesn't discuss how we perceive quality then something is wrong.
A more direct argument would be that it takes roughly an equal amount of effort to halve the distance from a rough work to its ideal. Going from 90% to 99% takes the same as going from 99% to 99.9%, but the latter only covers a tenth of the distance. If our perception is more sensitive to the _absolute_ size of the error, you get exponential effort to improve something.
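A tiny numeric sketch of that argument, under its own assumptions (each unit of effort halves the remaining distance to the ideal, and perception tracks the absolute error):

    import math

    def effort_to_reach(closeness):
        # Units of effort to get `closeness` of the way to the ideal,
        # if each unit of effort halves the remaining error.
        remaining = 1.0 - closeness
        return math.log2(1.0 / remaining)

    for c in (0.90, 0.99, 0.999):
        print(f"{c:.1%} of the way there: ~{effort_to_reach(c):.1f} units of effort")
    # 90.0% of the way there: ~3.3 units of effort
    # 99.0% of the way there: ~6.6 units of effort
    # 99.9% of the way there: ~10.0 units of effort
    # Each extra "9" costs the same ~3.3 units but buys a 10x smaller
    # absolute improvement, so effort per unit of absolute improvement
    # grows exponentially.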
Your first line assumes that `q` fails to refer to an objective property. The `e^q` space isn't quality, just as `e^t` isn't temperature (holding the property we are talking about fixed). Thus the comment ends up being circular.
The issue was with the word "it". In the sentence, that word is acting as an indirection to both q and e^q instead of referring to a unitary thing. So yes, "it" does become linear/sublinear, but "it" is no longer the original subject of discussion.
Very funny to put a bibtex citation under such a small piece of work
I liked the post but can someone explain how macro choices change the acceptance volume?
Is it their effect on the total number of available choices?
Does picking E minor somehow give you fewer options than C major (I'm not a musician)?
No, you have an equal number of options; minor and major are effectively transpositions/rotations of each other. E.g. the diatonic triad qualities are "m dim M m m M M" for minor vs "M m m M M m dim" for major (m = minor, M = major, dim = diminished): the same pattern, rotated (quick check below).
The post is likely getting at the point that, for English-speaking/Western audiences at least, you are more likely to find songs written in C major, and thus they are more familiar and 'safer'. You _can_ write great songs in Em, but it's just a little less common, so maybe it requires more work to 'fit into tastes'.
edit: changed 'our' to english/western audiences
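A minimal Python check of the rotation claim above (my own sketch): natural minor starts on the 6th degree of its relative major, so its triad-quality pattern should be the major pattern rotated by five positions (0-indexed).

    # Qualities of the triads built on scale degrees 1-7.
    major = ["M", "m", "m", "M", "M", "m", "dim"]
    minor = ["m", "dim", "M", "m", "m", "M", "M"]  # natural minor

    # Rotate major to start on its 6th degree (index 5).
    rotated = major[5:] + major[:5]
    assert rotated == minor
    print("natural minor = major rotated to degree 6:", " ".join(rotated))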
> Does picking E minor somehow give you fewer options than C major (I'm not a musician)?
Short answer: No. No matter what note you start on you have exactly the same set of options.
Long answer: No. All scales (in the system of temperament used in the vast majority of music) are symmetrical groups of transpositions of certain fundamental scales.[1] These work very much like a cyclic group, if you have done algebra. In the example you chose, E minor is the "relative minor" of G major, meaning that if you play an E Aeolian mode it contains all the same notes as G major, and G major gives you the exact same options as C major or any other major scale. What Messiaen noticed is that there are grouped sets of "modes of limited transposition" which all work this way. So the major scale (and its "modes", meaning the scales with the same key signature of sharps or flats but starting on each degree of the major scale) can be transposed exactly 11 times without repeating. There are 3 other scales that have this property (normally these are called the harmonic minor, melodic minor and melodic major[2]). There are also modes of limited transposition with only 1 transposition (the chromatic scale), 2 (the whole-tone scale), 3 (the "diminished scale") and so on. Messiaen explains them all in that text if you're interested. (A quick computational check of these transposition counts follows the footnotes.)
[1] This theory was first written out in full in Messiaen's "The technique of my musical language" but is usually taught as either "Late Romantic" or "Jazz" Harmony depending on where you study https://monoskop.org/images/5/50/Messiaen_Olivier_The_Techni...
[2] If you do "classical" harmony, your college may teach you the minor scales wrong, with a descending version that is just a mode of the major scale. You may also not have been taught melodic major, but it's awesome. (By "wrong" here, I mean specifically that Messiaen and Schoenberg would say it's wrong, because a scale is a key signature/tonal area and so can't have different notes when a melody is ascending versus descending. If there are two sets of different notes, Messiaen would say they are two scales, and I would agree.)
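For the curious, a small computational check of those transposition counts (my own sketch, treating a scale as a set of pitch classes mod 12):

    # Count the distinct transpositions of a scale as a pitch-class set.
    def distinct_transpositions(scale):
        return len({frozenset((p + t) % 12 for p in scale) for t in range(12)})

    scales = {
        "major":      [0, 2, 4, 5, 7, 9, 11],    # C D E F G A B
        "whole-tone": [0, 2, 4, 6, 8, 10],
        "chromatic":  list(range(12)),
        "diminished": [0, 2, 3, 5, 6, 8, 9, 11],  # octatonic
    }
    for name, pcs in scales.items():
        print(name, "->", distinct_transpositions(pcs))
    # major -> 12 (11 transpositions without repeating, plus the original)
    # whole-tone -> 2, chromatic -> 1, diminished -> 3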
Perceived quality is relative. It's roughly linearly related to rank position along some dimension, but moving up in rank requires exponential effort due to competition.
I would be surprised if anyone perceives quality like that. Like, are you saying that in a situation where there are only two examples of some type of work, it is impossible to judge whether one is much better than the other, it is only possible to say that it's better? What makes you think it works like this?
Perhaps a controversial view on this particular forum but I find the tendency of a certain type of person* to write about everything in this overly-technical way regardless of whether it is appropriate to the subject matter to be very tiresome ("executing cached heuristics", "constrained the search space").
*I associate it with the asinine contemporary "rationalist" movement (LessWrong et al.) but I'm not making any claims the author is associated with this.
What diction is "appropriate to the subject matter" is a negotiation between author and reader.
I think the author is ok with it being inappropriate for many; it's clearly written for those who enjoy math or CS.
I enjoy maths and CS and I could barely understand a word of it. It seems to me rather to have been written to give the impression of being inappropriate for many, as a stand-in for actually expressing anything with any intellectual weight.
I think it's a trick. It seems to me the article is just a series of ad-hoc assumptions and hypotheses without any support. The language aims to hide this, and makes you think about the language instead of its contents. Which is logically unsound: in a sharp peak, micro optimizations would give you a clearer signal where the optimum lies since the gradient is steeper.
> In a sharp peak, micro optimizations would give you a clearer signal where the optimum lies since the gradient is steeper.
I would refuse to even engage with the piece on this level, since it lends credibility to the idea that the creative process is even remotely related to or analogous to gradient descent.
I wouldn't jump to calling it a trick, but I agree: the author sacrificed too much clarity in pursuit of efficiency.
The author set up an interesting analogy but failed to explore where it breaks down or how all the relationships work in the model.
My inference about the author's meaning was this: near a sharp peak, searching for useful moves is harder because you have fewer acceptable options as you approach the peak.
Fewer in absolute or relative terms? If you scale down your search space... This only makes some kind of sense if your step size is fixed. While I agree with another poster that reducing a creative process to gradient descent is not wise, the article also misses what makes such a gradient descent hard: it's not the sharp peaks, it's the flat areas around them, and the presence of local minima.
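A toy numeric illustration of that last point (my own sketch, not from the article): plain gradient descent converges quickly where the gradient signal is strong and crawls where the landscape flattens near the optimum.

    def descend(df, x=1.0, lr=0.1, steps=100):
        # Plain gradient descent on a 1-D function with derivative df.
        for _ in range(steps):
            x -= lr * df(x)
        return x

    sharp = descend(lambda x: 2 * x)       # f(x) = x^2: healthy gradient
    flat = descend(lambda x: 4 * x ** 3)   # f(x) = x^4: gradient dies near 0

    print(f"after 100 steps: x^2 bowl -> {sharp:.1e}, x^4 bowl -> {flat:.2f}")
    # x^2 converges geometrically (~2e-10); x^4 stalls around 0.11 because
    # the gradient vanishes in the flat region around the optimum.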
I see your point. I'd meant relatively fewer progressive options compared to an absolute and unchanging number of total options.
But that's not what the author's analogy would imply.
Still, I think you're saying the author is deducing the creative process as a kind of gradient descent, whereas my reading was the author was trying to abductively explore an analogy.
It's a middle school essay that is trying to score points based on the number of metaphors used. Very unappealing and I wouldn't call it technical.
EDIT: For all the people saying the writing is inspired by math/cs, that's not at all true. That's not how technical writing is done. This guy is just a poser.
> I wouldn't call it technical
Fair. Perhaps I should have said it gives the illusion of being technical.
A bit harsh, but I see what you mean. It is tempting to try to fit every description of the world into a rigorous technical straitjacket, perhaps because it feels like you have understood it better?
Maybe it is similar to how scientists get flak for writing in technical jargon instead of 'plain language'. Partly it is a necessity, to be unambiguous; but it is also partly a choice, a way to signal that you are doing Science, not just describing messing about with chemicals or whatever.
I'll be the first to admit I was unable to follow the article because of this.
To be fair, whether you think it is appropriate here or not is always an artistic choice, but, yeah, this article is a really heavy offender. Reading the "Abstract claim" I caught myself thinking that this word salad hardly makes any sense, but I don't know, and I'm just gonna let it go, because I am not yet convinced that it's worth my time to decipher it.
Also, "asinine contemporary "rationalist" movement" is pretty lightweight in this regard. Making an art out of writing as bad as possible has been a professional skill of any "academic philosopher" (both "continental" and "analytical" sides) for a century at the very least.
I mean, I talk like this as well. It's not really intentional. My interests influence the language that I use.
Why is the rationalist movement asinine? I don't know much about it but it seems interesting.
I have observed it too, it is heavily inspired by economics and mathematics.
Saying "it's better to complete something imperfect than spend forever polishing" - dull, trite, anyone knows that. Saying "effort is a utility curve function that must be clamped to achieve meta-optimisation" - now that sounds clever
If I were going to be uncharitable: I think there are corners of the internet where people write straightforward things dressed up in technical language to launder them as somehow academic and data-driven.
And you're right, it does show up in the worse parts of the EA / rationalist community.
(This writing style, at its worst, allows people to say things like "I don't want my tax spent on teaching poor kids to read" but without looking like complete psychopaths - "aggregate outcomes in standardised literacy programmes lag behind individualised tutorials")
That's not what the blog post here is doing, but it is definitely bad language use that is doing more work to obscure ideas than illuminate them
Yes, you articulated my issue in a much better way than I managed to!
Just reading the abstract, I have to agree with you.
No, we need more of this. The opposite of this is Robin Williams destroying the poetry-theory book in Dead Poets Society; the result was weak kids, and one of them committed suicide. More technical material in relation to art is a good thing, but it's to be expected that Anglo-Saxon people have an allergy to it; they think it is somehow socialist or something, and they need art to be unrefined, etc.
I am not sure you watched the same movie I did.
Respectfully, I have no idea what you're talking about. Dead Poets Society is a story and the message of the story isn't that Robin Williams' character is bad.
Are you saying my perspective is anti-socialist? What is "refined" art?
Of course, in the movie they sell the idea that art is not subject to scientific or technical analysis, but if you do an independent analysis you realize those kids didn't become stronger or freer. Art, as the article explained, is related to effort and technique. But people in the US LOVE stuff like Jackson Pollock; they need art to not be a thing you put effort and mind into.
You're confusing art with technical skill. You like art that demonstrates technical skill; that's fine. But art doesn't have to demonstrate technical skill to be artistic. Indeed, defining exactly what 'art' is turns out to be surprisingly difficult.
Can you give an example of an artwork you think is acceptable?
On a related note I wrote a few “poems” using anagrams. The principle is simple: take a short phrase and have each line in the poem be an anagram of it. You can’t do this with just any phrase; the letters need to be reasonably well balanced for the target language so you can still form pronouns, key grammatical verbs (to be, to have, etc.), and some basic structure.
It becomes interesting once sentences span multiple lines and you start using little tactical tricks to keep syntax, semantics, and the overall argument coherent while respecting the anagram constraint.
Using an anagram generator is of course a first step, but the landscapes it offers are mostly desert: the vast majority of candidates are nonsense, and those that are grammatical are usually thematically off relative to what you’ve already written. And yet, if the repeated anagram phrase is chosen well, it doesn’t feel that hard to build long, meaningful sentences. Subjectively, the difficulty seems to scale roughly proportionally with the length of the poem, rather than quadratically and beyond.
There’s a nice connection here to Sample Space Reducing (SSR) processes. The act of picking letters from a fixed multiset to form words, removing them as you go, is an SSR. So is sentence formation itself: each committed word constrains the space of acceptable continuations (morphology, syntax, discourse, etc.).
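A minimal Python sketch of that letter-level SSR (my own illustration; the seed phrase is a classic example, not one of my poems):

    from collections import Counter

    def letters(phrase):
        # Multiset of letters, ignoring case, spaces and punctuation.
        return Counter(c for c in phrase.lower() if c.isalpha())

    def fits(pool, word):
        # Can `word` still be formed from the remaining letter pool?
        return all(pool[c] >= n for c, n in letters(word).items())

    seed = "eleven plus two"  # a classic anagram of "twelve plus one"
    assert letters(seed) == letters("twelve plus one")  # the line constraint

    # Committing words one by one shrinks the pool: the SSR step.
    pool = letters(seed)
    for word in ["twelve", "plus", "one"]:
        assert fits(pool, word)
        pool -= letters(word)
    assert sum(pool.values()) == 0  # every letter consumed exactly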
Understanding scaling through history-dependent processes with collapsing sample space, https://arxiv.org/pdf/1407.2775
> Many such stochastic processes, especially those that are associated with complex systems, become more constrained as they unfold, meaning that their sample-space, or their set of possible outcomes, reduces as they age. We demonstrate that these sample-space reducing (SSR) processes necessarily lead to Zipf’s law in the rank distributions of their outcomes.
> We note that SSR processes and nesting are deeply connected to phase-space collapse in statistical physics [21, 30–32], where the number of configurations does not grow exponentially with system size (as in Markovian and ergodic systems), but grows sub-exponentially. Sub-exponential growth can be shown to hold for the phase-space growth of the SSR sequences introduced here. In conclusion we believe that SSR processes provide a new alternative view on the emergence of scaling in many natural, social, and man-made systems.
In my case there are at least two coupled SSRs: (1) the anagrammatic constraint at the line level (letters being consumed), and (2) the layered SSRs of natural language that govern what counts as a well-formed and context-appropriate continuation (from morphology and syntax up through discourse and argumentation). In practice I ended up exploiting this coupling: by reserving or spending strategic words (pronouns, conjunctions, or semantically heavy terms established earlier), I could steer both the unfolding sentence and the remaining letter pool, and explore the anagram space far more effectively than a naive generator.
Very hand-wavy hypothesis: natural language is a complex, multi-layered SSR engine that happens to couple extremely well to other finite SSR constraints. That makes it unusually good at “solving” certain bounded combinatorial puzzles from the inside—up to and including, say, assembling IKEA furniture.
One extra nuance here: in the anagrammatic setting, the coupling between constraints is constitutive rather than merely referential. The same finite multiset of letters simultaneously supports the combinatorial constraint (what strings are formable) and the linguistic constraint (what counts as a syntactically and discursively acceptable move), so every choice is doubly binding. That’s different from cases like following IKEA instructions, where language operates as an external controller that refers to another state space (parts, tools, assembly steps) without sharing its “material” degrees of freedom. This makes the anagram case feel like a toy model where syntax and semantics are not two separate realms but two intertwined SSR layers over one shared substrate—suggesting that what we call “reference” might itself be an emergent pattern in how such nested SSR systems latch onto each other.
I appreciate this post, as I think too many folks focus on the end before understanding what got it there. It's kind of like asking what a movie is about before watching it, or like movie trailers that essentially show way too much.
We should all take some time to better understand what brought us here, to be better prepared for general creative work and uniqueness in the future...
"Understanding Poetry, by Dr J. Evans Pritchard, PhD"
lol, I cited this exact scene as an example of the typical Anglo-Saxon conception of art. Now you are crying that art has become shit, but any attempt at scientific analysis is taken as a joke, when actual poetry is even harder than code; the amount of data you can compress into a single word, with rhymes and such, is the hardest thing ever. But because you don't want to think someone can put in that effort, you want Robin Williams and Dead Poets Society to win and make art non-scientifically understandable. If you can't do scientific or technical analysis of art, that's your opinion, but why the obsession with trashing anyone who does it?
I believe that last-mile edits do not significantly improve the quality of (most) creative work. To produce high-quality work, one must have already "cached" their "motor heuristics," which, in simpler terms, means having dedicated thousands of hours to deep and deliberate practice in their field.
The definition of 'last-mile edits' is very subjective, though. If you're dealing with open systems, it's almost unthinkable to design something and not need to iterate on it until the desired outcome is achieved. In other domains, for example, playing an instrument, your skills need to have been honed previously: there's nothing that will make you sound better (without resorting to editing it electronically).
A teacher told me once that editing poetry is like trying to open a glass jar. Eventually, you have to set it down or you’ll break the thing.
I discussed this premise with my LLM and we came to this following conclusion which I find quite elegant:
> In any bounded system under feedback, refinement produces diminishing returns and narrowing tolerance, governed by a superlinear precision cost.
> There isn’t one official name, but what you’ve articulated is essentially a unified formulation of the diminishing-returns / sensitivity-amplification law of creation — a pattern deep enough that it keeps being rediscovered in every domain that pushes against the limits of order.
Agreed, that's an elegant conclusion. Thanks for sharing.
PS Usually LLM-generated content is strongly penalized here (and with good reason). But IMHO, when it is clearly noted as such and shares something worthwhile (as in this case), an exception should be made.