Polish blog posts for new design

This commit is contained in:
Shadowfacts 2025-02-12 10:50:02 -05:00
parent 471887c885
commit e2abb6873f
26 changed files with 94 additions and 111 deletions

View File

@ -71,6 +71,10 @@ blockquote {
left: -25px;
top: -10px;
}
em {
font-style: normal;
}
}
img {
@ -119,6 +123,10 @@ hr {
border: 1px solid var(--secondary-text-color);
}
.italic {
font-style: italic;
}
html {
font-family: "Valkyrie A", Charter, serif;
font-size: 16px;
@ -176,6 +184,7 @@ header {
.body-content {
font-size: 1.25rem;
line-height: 1.4;
// Chrome only, but minimizes orphan words
text-wrap: pretty;
}
@ -223,7 +232,6 @@ aside:not(.inline) {
margin-right: -50%;
width: 40%;
font-size: 1rem;
line-height: 1.25;
color: var(--secondary-text-color);
transition: color 0.2s ease-in-out;
display: block;
@ -318,7 +326,6 @@ aside.inline {
float: none;
margin-right: 0;
width: auto;
line-height: initial;
transform: none;
background-color: lighten($link-color, 43%);
padding: 1rem;

View File

@ -52,6 +52,9 @@
</span>
</p>
<div class="body-content" itemprop="articleBody">
{% if metadata.preamble %}
{{ metadata.preamble }}
{% endif %}
{{ content }}
</div>
</article>

View File

@ -1,6 +1,6 @@
```
title = "Algorithmic Bias"
tags = ["misc", "social media"]
tags = ["politics", "social media"]
date = "2020-06-05 09:55:42 -0400"
slug = "algorithmic-bias"
```
@ -15,4 +15,3 @@ This is what algorithmic bias looks like. **Algorithms are not neutral.**[^1]
</figure>
[^1]: "Algorithm" is a word here used not in the purely computer science sense, but to mean an element of software which operates in a black box, often with a machine learning component, with little or no human supervision, input, or control.

View File

@ -10,9 +10,9 @@ On and off for the past year and a half or so, I've been working on a small side
I knew that MP3 files had some embedded metadata, only for the reason that looking at most tracks in Finder shows album artwork and information about the track. Cursory googling led me to the [ID3 spec](https://id3.org/).
[^1]: Actual, DRM-free files because music streaming services by and large don't pay artists fairly[^2]. MP3s specifically because they Just Work everywhere, and I cannot for the life of me hear the difference between a 320kbps MP3 and an \<insert audiophile format of choice> file.
[^2]: Spotify pays artists 0.38¢ per play and Apple Music pays 0.783¢ per play ([source](https://help.songtrust.com/knowledge/what-is-the-pay-rate-for-spotify-streams)). For an album of 12 songs that costs $10 (assuming wherever you buy it from takes a 30% cut), you would have to listen all the way through it between 75 and 150 times for the artist to receive as much money as if you had just purchased the album outright. That's hardly fair and is not sustainable for all but the largest of musicians.
[^1]: Actual, DRM-free files because music streaming services by and large don't pay artists fairly. MP3s specifically because they Just Work everywhere, and I cannot for the life of me hear the difference between a 320kbps MP3 and an \<insert audiophile format of choice> file.
<br><br>
Spotify pays artists 0.38¢ per play and Apple Music pays 0.783¢ per play ([source](https://help.songtrust.com/knowledge/what-is-the-pay-rate-for-spotify-streams)). For an album of 12 songs that costs $10 (assuming wherever you buy it from takes a 30% cut), you would have to listen all the way through it between 75 and 150 times for the artist to receive as much money as if you had just purchased the album outright. That's hardly fair and is not sustainable for all but the largest of musicians.
<!-- excerpt-end -->
@ -489,4 +489,3 @@ iex> ID3.parse_tag(data)
```
One of the pieces of information I was hoping I could get from the ID3 tags was the durations of the MP3s in my library. But alas, none of the tracks I have use the TLEN frame, so it looks like I'll have to try and pull that data out of the MP3 myself. But that's a post for another time...

View File

@ -4,7 +4,7 @@ tags = ["build a programming language", "rust"]
date = "2021-04-13 17:00:42 -0400"
short_desc = "Turning a string into a sequence of tokens."
slug = "lexing"
preamble = '<p style="font-style: italic;">This post is part of a <a href="/build-a-programming-language/" data-link="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
preamble = '<p class="italic">This post is part of a <a href="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
```
The first part of the language I've built is the lexer. It takes the program text as input and produces a vector of tokens. Tokens are the individual units that the parser will work with, rather than it having to work directly with characters. A token could be a bunch of different things. It could be a literal value (like a number or string), or it could be an identifier, or a specific symbol (like a plus sign).
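As a minimal illustration of the idea (with hypothetical names, not the series' actual code), a lexer for just integers and `+` might look like:

```rust
// A sketch of the text-to-tokens step described above: program text in,
// Vec<Token> out. Names are illustrative only.
#[derive(Debug, PartialEq)]
enum Token {
    Integer(i64),
    Plus,
}

fn lex(input: &str) -> Vec<Token> {
    let mut tokens = Vec::new();
    let mut chars = input.chars().peekable();
    while let Some(&c) = chars.peek() {
        if c.is_ascii_digit() {
            // accumulate consecutive digits into one Integer token
            let mut n = 0i64;
            while let Some(&d) = chars.peek() {
                if let Some(digit) = d.to_digit(10) {
                    n = n * 10 + digit as i64;
                    chars.next();
                } else {
                    break;
                }
            }
            tokens.push(Token::Integer(n));
        } else if c == '+' {
            chars.next();
            tokens.push(Token::Plus);
        } else {
            // skip whitespace and anything unrecognized
            chars.next();
        }
    }
    tokens
}

fn main() {
    println!("tokens: {:?}", lex("12 + 34"));
}
```

Running it on `12 + 34` produces the `[Integer(12), Plus, Integer(34)]` sequence shown at the end of the post.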
@ -96,5 +96,3 @@ fn main() {
$ cargo run
tokens: [Integer(12), Plus, Integer(34)]
```

View File

@ -4,7 +4,7 @@ tags = ["build a programming language", "rust"]
date = "2021-04-14 17:00:42 -0400"
short_desc = "Building a small AST from the stream of tokens."
slug = "parsing"
preamble = '<p style="font-style: italic;">This post is part of a <a href="/build-a-programming-language/" data-link="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
preamble = '<p class="italic">This post is part of a <a href="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
```
Now that the lexer is actually lexing, we can start parsing. This is where the Tree in Abstract Syntax Tree really comes in. What the parser is going to do is take a flat sequence of tokens and transform it into a shape that represents the actual structure of the code.
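As a concrete sketch of that flat-to-tree transformation (illustrative names, not the series' actual types), the token sequence for `12 + 34` might become:

```rust
// A minimal AST shape: leaves hold values, interior nodes hold operators.
#[derive(Debug, PartialEq)]
enum Node {
    Integer(i64),
    BinaryOp {
        op: char,
        left: Box<Node>,
        right: Box<Node>,
    },
}

// The flat tokens [Integer(12), Plus, Integer(34)] become a tree with
// the operator at the root and its operands as children.
fn example_tree() -> Node {
    Node::BinaryOp {
        op: '+',
        left: Box::new(Node::Integer(12)),
        right: Box::new(Node::Integer(34)),
    }
}

fn main() {
    println!("node: {:#?}", example_tree());
}
```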
@ -97,4 +97,3 @@ node: Some(
```
The eagle-eyed may notice that while we have parsed the expression, we have not parsed it correctly. What's missing is operator precedence and associativity, but that will have to wait for next time.

View File

@ -4,7 +4,7 @@ tags = ["build a programming language", "rust"]
date = "2021-04-15 17:00:42 -0400"
short_desc = "A bad calculator."
slug = "evaluation"
preamble = '<p style="font-style: italic;">This post is part of a <a href="/build-a-programming-language/" data-link="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
preamble = '<p class="italic">This post is part of a <a href="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
```
Last time I said operator precedence was going to be next. Well, if you've read the title, you know that's not the case. I decided I really wanted to see this actually run[^1] some code[^2], so let's do that.
@ -78,4 +78,3 @@ result: Integer(6)
```
Next time, I'll add some more operators and actually get around to operator precedence.

View File

@ -3,7 +3,7 @@ title = "Part 4: Operator Precedence"
tags = ["build a programming language", "rust"]
date = "2021-04-16 17:00:42 -0400"
slug = "operator-precedence"
preamble = '<p style="font-style: italic;">This post is part of a <a href="/build-a-programming-language/" data-link="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
preamble = '<p class="italic">This post is part of a <a href="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
```
I've gone through the lexer, parser, and evaluator and added subtraction, multiplication, and division in addition to, uh... addition. And they kind of work, but there's one glaring issue that I mentioned back in part 2. It's that the parser has no understanding of operator precedence. That is to say, it doesn't know which operators have a higher priority in the order of operations when implicit grouping is taking place.
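The usual fix is a numeric precedence table plus a grouping rule. The series has a `should_group_left` function; here is a hedged, simplified sketch of the idea, operating directly on operators rather than on the series' AST nodes:

```rust
// Illustrative sketch: higher number = binds tighter.
#[derive(Clone, Copy)]
enum Op {
    Add,
    Sub,
    Mul,
    Div,
}

fn precedence(op: Op) -> u8 {
    match op {
        Op::Add | Op::Sub => 1,
        Op::Mul | Op::Div => 2,
    }
}

// Group to the left when the left operator binds at least as tightly.
// Equal precedence groups left, giving left-associativity:
// `a - b - c` parses as `(a - b) - c`, while `a - b * c`
// parses as `a - (b * c)`.
fn should_group_left(left: Op, right: Op) -> bool {
    precedence(left) >= precedence(right)
}

fn main() {
    assert!(should_group_left(Op::Mul, Op::Add));
    assert!(!should_group_left(Op::Add, Op::Mul));
    assert!(should_group_left(Op::Sub, Op::Sub));
    println!("ok");
}
```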
@ -189,4 +189,3 @@ fn main() {
$ cargo run
result: Integer(10)
```

View File

@ -4,7 +4,7 @@ tags = ["build a programming language", "rust"]
date = "2021-04-17 17:00:42 -0400"
short_desc = "A small gotcha in Rust's TakeWhile iterator."
slug = "fixing-floats"
preamble = '<p style="font-style: italic;">This post is part of a <a href="/build-a-programming-language/" data-link="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
preamble = '<p class="italic">This post is part of a <a href="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
```
In the process of adding floating point numbers, I ran into something a little bit unexpected. The issue turned out to be pretty simple, but I thought it was worth mentioning.
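The gotcha comes down to how `TakeWhile` terminates: it consumes the first element that fails its predicate, so that element is silently lost from the underlying iterator. A minimal reproduction (my own sketch, not the post's code) and the usual `Peekable`-based fix:

```rust
// Reproduces the gotcha: lexing the integer part of "12.5" with
// take_while swallows the '.', so the next element is '5'.
fn demo_gotcha() -> (String, Option<char>) {
    let mut it = "12.5".chars();
    let digits: String = it.by_ref().take_while(|c| c.is_ascii_digit()).collect();
    (digits, it.next())
}

// The fix: peek before consuming, so the non-digit stays in the iterator.
fn take_digits<I: Iterator<Item = char>>(it: &mut std::iter::Peekable<I>) -> String {
    let mut s = String::new();
    while let Some(c) = it.peek().copied() {
        if c.is_ascii_digit() {
            s.push(c);
            it.next();
        } else {
            break;
        }
    }
    s
}

fn main() {
    // the '.' was consumed by take_while's failed predicate check
    assert_eq!(demo_gotcha(), ("12".to_string(), Some('5')));

    let mut it = "12.5".chars().peekable();
    assert_eq!(take_digits(&mut it), "12");
    assert_eq!(it.next(), Some('.')); // the '.' survives
    println!("ok");
}
```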
@ -84,5 +84,3 @@ fn parse_number<T: Iterator<Item = char>>(it: &mut T) -> Option<Token> {
// ...
}
```

View File

@ -3,7 +3,7 @@ title = "Part 6: Grouping"
tags = ["build a programming language", "rust"]
date = "2021-04-18 14:42:42 -0400"
slug = "grouping"
preamble = '<p style="font-style: italic;">This post is part of a <a href="/build-a-programming-language/" data-link="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
preamble = '<p class="italic">This post is part of a <a href="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
```
Parsing groups is pretty straightforward, with only one minor pain point to keep in mind. I'll gloss over adding left and right parentheses because it's super easy—just another single character token.
@ -97,4 +97,3 @@ node: Group {
```
(I won't bother discussing evaluating groups because it's trivial.)

View File

@ -4,7 +4,7 @@ tags = ["build a programming language", "rust"]
date = "2021-04-19 17:00:42 -0400"
short_desc = "A minor fight with the Rust borrow checker."
slug = "cleaning-up-binary-operators"
preamble = '<p style="font-style: italic;">This post is part of a <a href="/build-a-programming-language/" data-link="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
preamble = '<p class="italic">This post is part of a <a href="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
```
The code from [part 4](/2021/operator-precedence/) that checks whether a pair of binary operators should be grouped to the left or right works, but I'm not particularly happy with it. The issue is that it needs to pattern match on the right node twice: first in the `should_group_left` function, and then again in `combine_with_binary_operator` if `should_group_left` returned true.
@ -140,4 +140,3 @@ fn combine_with_binary_operator(left: Node, token: &Token, right: Node) -> Node
}
}
```

View File

@ -3,7 +3,7 @@ title = "Part 8: Variable Lookups and Function Calls"
tags = ["build a programming language", "rust"]
date = "2021-04-25 11:15:42 -0400"
slug = "variable-lookups-and-function-calls"
preamble = '<p style="font-style: italic;">This post is part of a <a href="/build-a-programming-language/" data-link="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
preamble = '<p class="italic">This post is part of a <a href="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
```
Arithmetic expressions are all well and good, but they don't really feel much like a programming language. To fix that, let's start working on variables and function calls.
@ -127,4 +127,3 @@ Call {
],
}
```

View File

@ -3,7 +3,7 @@ title = "Part 9: Statements"
tags = ["build a programming language", "rust"]
date = "2021-05-03 17:46:42 -0400"
slug = "statements"
preamble = '<p style="font-style: italic;">This post is part of a <a href="/build-a-programming-language/" data-link="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
preamble = '<p class="italic">This post is part of a <a href="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
```
So the parser can handle a single expression, but since we're not building a Lisp, that's not enough. It needs to handle multiple statements. For context, an expression is a piece of code that represents a value whereas a statement is a piece of code that can be executed but does not result in a value.
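That distinction can be sketched as two types (illustrative names only, not the series' actual code): a program is a list of statements, each of which is either executed for effect or wraps an expression.

```rust
// Expressions represent values; statements are executed but produce none.
#[derive(Debug)]
enum Node {
    Integer(i64),
}

#[derive(Debug)]
enum Statement {
    // executed for its effect, e.g. introducing a binding
    Declaration { name: String, value: Node },
    // an expression used in statement position
    Expression(Node),
}

fn main() {
    let program = vec![
        Statement::Declaration {
            name: "a".to_string(),
            value: Node::Integer(1),
        },
        Statement::Expression(Node::Integer(2)),
    ];
    println!("statements: {:#?}", program);
}
```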
@ -93,5 +93,3 @@ statements: [
),
]
```

View File

@ -3,7 +3,7 @@ title = "Part 10: Variable Declarations"
tags = ["build a programming language", "rust"]
date = "2021-05-09 19:14:42 -0400"
slug = "variable-declarations"
preamble = '<p style="font-style: italic;">This post is part of a <a href="/build-a-programming-language/" data-link="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
preamble = '<p class="italic">This post is part of a <a href="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
```
Now that the parser can handle multiple statements and the usage of variables, let's add the ability to actually declare variables.
@ -110,4 +110,3 @@ Integer(1)
```
[^2]: The `dbg` function is a builtin I added that prints out the Rust version of the `Value` it's passed.

View File

@ -4,7 +4,7 @@ tags = ["build a programming language", "rust"]
date = "2021-06-29 19:14:42 -0400"
short_desc = "Evaluating if statements and dealing with nested scopes."
slug = "lexical-scope"
preamble = '<p style="font-style: italic;">This post is part of a <a href="/build-a-programming-language/" data-link="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
preamble = '<p class="italic">This post is part of a <a href="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
```
After adding variables, I added boolean values and comparison operators, because why not. With that in place, I figured it would be a good time to add if statements. Parsing them is straightforward—you just look for the `if` keyword, followed by a bunch of stuff—so I won't go into the details. But actually evaluating them was a bit more complicated.
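A common way to model the nested scopes involved here is a chain where each lookup falls back to the parent scope. A hedged sketch of the idea (illustrative types, not the series' actual code):

```rust
use std::collections::HashMap;

// Each block gets its own scope; lookups walk up the parent chain.
struct Scope<'a> {
    vars: HashMap<String, i64>,
    parent: Option<&'a Scope<'a>>,
}

impl<'a> Scope<'a> {
    fn lookup(&self, name: &str) -> Option<i64> {
        self.vars
            .get(name)
            .copied()
            .or_else(|| self.parent.and_then(|p| p.lookup(name)))
    }
}

fn main() {
    let mut outer = Scope { vars: HashMap::new(), parent: None };
    outer.vars.insert("x".to_string(), 1);

    let mut inner = Scope { vars: HashMap::new(), parent: Some(&outer) };
    inner.vars.insert("y".to_string(), 2);

    assert_eq!(inner.lookup("x"), Some(1)); // found via the parent chain
    assert_eq!(inner.lookup("y"), Some(2));
    assert_eq!(outer.lookup("y"), None); // inner bindings don't leak out
    println!("ok");
}
```

This mirrors how a variable declared inside an `if` body is visible there but disappears once the block's scope is discarded.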
@ -98,4 +98,3 @@ fn main() {
$ cargo run
Integer(1)
```

View File

@ -81,11 +81,10 @@ There's a note in the Network.framework header comments[^2] for `nw_framer_parse
The possibility of a copy being needed to form a contiguous buffer implies that there could be discontiguous data, which lines up with my "chunks" hypothesis and would explain the behavior I observed.
<aside>
<aside class="inline">
Fun fact, the C function corresponding to this Swift API, `nw_framer_parse_input`, takes a maximum length, but it also lets you pass in your own temporary buffer, in the form of a `uint8_t*`. It's therefore up to the caller to ensure that the buffer that's pointed to is at least as long as the maximum length. This seems like a place ripe for buffer overruns in sloppily written protocol framer implementations.
</aside>
Anyhow, if you're interested, you can find the current version of my Gemini client implementation (as of this post) [here](https://git.shadowfacts.net/shadowfacts/Gemini/src/commit/3055cc339fccad99ab064f2daccdb65efa8024c0/GeminiProtocol/GeminiProtocol.swift).

View File

@ -36,7 +36,7 @@ The display on this computer is great. Having had a high-refresh rate external m
If you don't know, Apple laptops starting with the 2016 MacBook Pro have used non-integer scaling factors. That is, by default they ran at point resolutions which were more than half of the pixel resolution in each dimension. So, a software pixel mapped to some fraction of a hardware pixel, meaning everything had to be imprecisely scaled before actually going to the panel. People have been complaining about this for years, and I'd always dismissed it because I never observed the issue. But, in hindsight, that's because the vast majority of my laptop usage was with it docked to an external monitor and peripherals. In that scenario, the laptop's built-in display ends up physically far enough away from my eyes that I don't perceive any blurriness. But, since I've been using this laptop more as an actual laptop—bringing the screen a good foot or two closer to my eyes—I've noticed that text is undeniably crisper.
<aside>
<aside class="inline">
Using the screen on this laptop, particularly when using it undocked and independent of an external monitor, has firmly cemented a belief I already held: the ideal monitor would be 5k (i.e., 2560x1440 at the Retina 2x pixel density), 27" diagonally, and 120Hz. My current external monitor is 1440p, 27", and 144Hz, and having used monitors of that size for years and years, I think it's the best combination of screen real estate and physical size of UI elements. Using a 5k iMac screen in the office[^1] convinced me that high DPI is very nice, even if you're just looking at text all day. And finally seeing a screen that is both high DPI and high refresh rate has validated that belief. I really hope that someone makes a monitor that manages to include both.
@ -95,4 +95,3 @@ One of my few complaints about the M1 Mac mini was resolved with the release of
## Conclusion
Overall, this is a fantastic computer. Apple Silicon means it's vastly faster and more efficient than any previous Mac laptop. As with last year, I'm impressed by how much software is already native—just a year and a half into the Mac's ARM transition—and how well Rosetta 2 works for software that isn't. Beyond Apple Silicon, this laptop is an upgrade in every single way over the few preceding generations, which felt like a big regression. Two laptops ago, I was using the 7.5-year-old 2012 Retina MacBook Pro: the first laptop of a new generation of MacBooks. I'm hopeful that with all these long-standing issues resolved, this machine will last a similarly long time.

View File

@ -3,7 +3,7 @@ title = "Part 12: Typed Variables"
tags = ["build a programming language", "rust"]
date = "2022-05-25 16:38:42 -0400"
slug = "typed-variables"
preamble = '<p style="font-style: italic;">This post is part of a <a href="/build-a-programming-language/" data-link="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
preamble = '<p class="italic">This post is part of a <a href="/build-a-programming-language/">series</a> about learning Rust and building a small programming language.</p><hr>'
```
Hi. It's been a while. Though the pace of blog posts fell off a cliff last year[^1], I've continued working on my toy programming language on and off.
@ -180,4 +180,3 @@ iteration: 9, a: 34
```
I also added a small CLI using [`structopt`](https://lib.rs/structopt) so I didn't have to keep writing code inside a string in `main.rs`.

View File

@ -70,11 +70,11 @@ return (link, rangeInSelf)
One important thing to note is that the line fragment's `attributedString` property is an entirely separate string from the text view's attributed string. So the return value of `characterIndex` and the longest effective range are indices into the _substring_. The rest of my code expects the return value to be a range in the index-space of the full string, so I need to convert it by adding the offset between the beginning of the document and the beginning of the line fragment's substring.
For the legacy TextKit 1 path, I use the `characterIndex(for:in:fractionOfDistanceBetweenInsertionPoints:)` method on the layout manager to get the character index and then look up the attribute at that location. I won't go into detail in that code here, since it's more straightforward—and lots of other examples can be found online.
For the legacy TextKit 1 path, I use the `characterIndex(for:in:fractionOfDistanceBetweenInsertionPoints:)` method on the layout manager to get the character index and then look up the attribute at that location. I won't go into detail in that code here, since it's more straightforward—and lots of other examples can be found online.
Next up: context menu previews. The vast majority of the code is unchanged, all that needs to be done is changing how we get the rects spanned by a range in the text.
In the `contextMenuInteraction(_:previewForHighlightingMenuWithConfiguration:)` method, rather than always using the TextKit 1 API, we again check if TextKit 2 is available, and if so, use that:
In the `contextMenuInteraction(_:previewForHighlightingMenuWithConfiguration:)` method, rather than always using the TextKit 1 API, we again check if TextKit 2 is available, and if so, use that:
```swift
var textLineRects = [CGRect]()
@ -110,4 +110,3 @@ With that, we can call `enumerateTextSegments` to get the bounding rectangles of
From there, the code is exactly the same as last time.
And with those changes in place, I can use my app without any warnings about text views falling back to TextKit 1 and the accompanying visual artifacts.

View File

@ -15,26 +15,23 @@ An annotated digest of the top "Hacker" "News" posts for the second week of Augu
<!-- excerpt-end -->
<style>
.article-content {
.body-content {
font-family: 'Comic Sans MS', 'Chalkboard SE', 'Comic Neue', 'VGA' !important;
font-size: 1.1rem !important;
}
h3, h4 {
font-family: 'Comic Sans MS', 'Chalkboard SE', 'Comic Neue', 'VGA' !important;
font-variant: small-caps;
font-variant: small-caps;
margin-bottom: 0;
}
h4 {
margin-top: 0;
}
.article-content a.header-anchor {
.body-content a.header-anchor {
display: none;
}
a::before, a::after {
content: "" !important;
}
.article-content a {
text-decoration: underline !important;
.body-content a::after {
content: none !important;
}
</style>
@ -65,4 +62,3 @@ In which the author finds a series of vulnerabilities that should be embarassing
### [Oasis: Small statically-linked Linux system](https://github.com/oasislinux/oasis)
#### August 14, 2022 [(comments)](https://news.ycombinator.com/item?id=32458744)
Some developers have come up with a Linux (business model: "Uber for FOSS dweebs") distribution that will be even more annoying to use than the usual musl-based ones. Half of Hackernews rails against dynamic linking and the other half rails against static linking. Compromise is on no one's mind; this can only end in war. Only one Hackernews is excited about any other potential merit of the project (namely that it boots a few seconds faster than their current distro of choice).

View File

@ -12,7 +12,7 @@ So, about six months ago I decided I wanted to rewrite my perfectly-working blog
The fundamental architecture of my site is unchanged from the last [rewrite](/2019/reincarnation). All of the HTML pages are generated up front and written to disk. The HTTP server can then handle any ActivityPub-specific requests and fall back to serving files straight from disk.
<blockquote class="pull right">
<blockquote>
i look forward to finishing this rewrite and then being able to sit back and enjoy... *checks notes* the exact same website i had before
</blockquote>

View File

@ -6,11 +6,9 @@ short_desc = ""
slug = "rust-swift"
```
From the person that brought you [calling Rust from Swift](/2022/swift-rust/) comes the thrilling[^1], action[^2]-packed sequel: calling Swift from Rust! For a [recent project](/2023/rewritten-in-rust/), I found myself needing to call into Swift from a Rust project (on both macOS and Linux) and so am documenting here in case you, too, are in this unenviable situation.
From the person that brought you [calling Rust from Swift](/2022/swift-rust/) comes the thrilling, action-packed sequel[^1]: calling Swift from Rust! For a [recent project](/2023/rewritten-in-rust/), I found myself needing to call into Swift from a Rust project (on both macOS and Linux) and so am documenting here in case you, too, are in this unenviable situation.
[^1]: "Thrilling" is here defined as "confounding".
[^2]: Herein, "action" refers to linker errors.
[^1]: "Thrilling" is defined as "confounding". And, herein, "action" refers to linker errors.
<!-- excerpt-end -->
@ -27,9 +25,9 @@ public func highlight(codePtr: UnsafePointer<UInt8>, codeLen: UInt64, htmlLenPtr
}
```
Reading the input is accomplished by turning the base pointer and length into a buffer pointer, turning that into a `Data`, and finally into a `String`. Unfortunately, there are no zero-copy initializers[^3], so this always copies its input. Being in a Rust mindset, I really wanted to get rid of this copy, but there doesn't seem to be an obvious way, and at the end of the day, it's not actually a problem.
Reading the input is accomplished by turning the base pointer and length into a buffer pointer, turning that into a `Data`, and finally into a `String`. Unfortunately, there are no zero-copy initializers[^2], so this always copies its input. Being in a Rust mindset, I really wanted to get rid of this copy, but there doesn't seem to be an obvious way, and at the end of the day, it's not actually a problem.
[^3]: There is a [`bytesNoCopy`](https://developer.apple.com/documentation/swift/string/init(bytesnocopy:length:encoding:freewhendone:)) initializer, but it's deprecated and the documentation notes that Swift doesn't support zero-copy initialization.
[^2]: There is a [`bytesNoCopy`](https://developer.apple.com/documentation/swift/string/init(bytesnocopy:length:encoding:freewhendone:)) initializer, but it's deprecated and the documentation notes that Swift doesn't support zero-copy initialization.
```swift
let buf = UnsafeBufferPointer(start: codePtr, count: Int(codeLen))

View File

@ -29,7 +29,7 @@ So what's standing in the way, and why aren't posts portable already? Well, here
Notice anything about it? The ID of the post tells you where it is. However, it doesn't just identify where the document can be found; it also identifies where it's _hosted_. This property is true of all object identifiers in Mastodon (and just about every other service that implements ActivityPub).
<aside>
<aside class="inline">
An interesting question is why this is the case. Personally, I think the most likely answer is that folks building AP backends have prior web development experience. And outside of decentralized systems, the simplest way of identifying an object from its URL is using a path parameter. So you get paths that look like `/users/:username/statuses/:status_id`. And then when you need something to use as an AP identifier, you use the whole URL. So dereferencing it ends up being trivial: your server just looks up the object in its database, same as ever. But that's exporting an implementation detail: your primary key (the attribute by which a post is identified in _your_ database) means nothing to me. It unnecessarily ties the post to the server where it originated.
@ -103,7 +103,7 @@ ActivityPub actors already have public/private keypairs and any activities deliv
#### Why not DIDs?
ATProto uses [DIDs](https://www.w3.org/TR/did-core/), rather than URIs, for identifiers. DIDs seem interesting, if quite complicated. The requirement of the ActivityPub spec that identifier URIs' authorities belong to "their originating server" does not _seem_ to preclude using DIDs as AP identifiers. The primary advantage DIDs confer is that they let you migrate between not just hosts/PDS's but usernames: he same underlying DID can be updated to refer to `@my.domain` from `@someone.bsky.social`.
ATProto uses [DIDs](https://www.w3.org/TR/did-core/), rather than URIs, for identifiers. DIDs seem interesting, if quite complicated. The requirement of the ActivityPub spec that identifier URIs' authorities belong to "their originating server" does not _seem_ to preclude using DIDs as AP identifiers. The primary advantage DIDs confer is that they let you migrate between not just hosts/PDS's but usernames: the same underlying DID can be updated to refer to `@my.domain` from `@someone.bsky.social`.
<aside>
@ -113,7 +113,7 @@ ATProto uses [DIDs](https://www.w3.org/TR/did-core/), rather than URIs, for iden
This does solve the caveat mentioned earlier, that the shared identity resolver has to be treated as infrastructure and be above moderation decisions. But, if the goal is to move the existing ecosystem towards portable identity in a reasonably expedient manner—and I believe that is the goal—adopting DIDs in the short term is unnecessary.
#### Moderating Actions Against Hosts
#### Moderation Actions Against Hosts
A very good point brought up in reply to this post was that since, right now, a domain/host/instance are all one and the same, they serve as a very useful target for moderation actions, but portable identity seems to interfere with that. If the moderators of a certain instance condone bad behavior from one person, another instance can take action against that entire instance, rather than just the individual, on the reasonable assumption the moderators will permit similar behavior from other people. But adding the layer of indirection I described makes it much harder to take such actions. Whereas now it's clear that `@alice@example.com` and `@bob@example.com` are hosted at the same place, if they used their own domains—say, `@alice@alices.place` and `@bob@bob.online`—it's no longer self-evident that they're hosted, and thus moderated, at the same place.

View File

@ -154,7 +154,7 @@ So, when put all together, there will be three layers which are (back to front):
width: 200px;
height: 400px;
position: absolute;
border: 1px dashed var(--ui-text-color);
border: 1px dashed black;
}
#layer-container > #red {
background-color: rgba(255, 0, 0, 0.4);
@ -215,7 +215,7 @@ It's not clear at the moment why we need two separate hosting controllers, rathe
The first step in building the actual effect we're after is collecting all of the views we want to use as sources as well as their geometries. The views themselves are necessary in addition to the frames because, unlike SwiftUI[^2], we're displaying the matched views outside of their original position in the view tree.
[^2]: If you want to convince yourself that SwiftUI works by moving the matched views in-place, try playing around with the other of the `clipped` and `matchedGeometryEffect` modifiers on the same view.
[^2]: If you want to convince yourself that SwiftUI works by moving the matched views in-place, try playing around with the order of the `clipped` and `matchedGeometryEffect` modifiers on the same view.
To send this information up through the view tree, we'll use a custom preference. The value of the preference will be a dictionary which maps the matched geometry's ID to a tuple of an `AnyView` and a `CGRect`. The view is the type-erased view that's being matched, and the rect is the frame of the source view. The important part of the preference key is the reducer which, rather than simply overwriting the current value, merges it with the new one. This means that, if there are multiple matched geometry sources in the view tree, reading the preference from higher up in the tree will give us access to _all_ of the sources.
@ -882,4 +882,4 @@ Because of the way we've implemented the `$destinations` publisher workaround,
## Conclusion
Overall, I'm very happy with how this implementaiton turned out. I won't claim it's straightforward, but I think it's relatively un-hacky for what it's doing and has been very reliable in my testing. And, if you've got the latest Tusker release, you're already running this code.
Overall, I'm very happy with how this implementation turned out. I won't claim it's straightforward, but I think it's relatively un-hacky for what it's doing and has been very reliable in my testing. And, if you've got the latest Tusker release, you're already running this code.

View File

@ -155,7 +155,7 @@ With the diagram converted to code (and several intervening iterations of notici
The code by itself is nigh-impossible to reason about, which is why it bears the warning "This is not a place of honor. No highly esteeme—" er. I mean, which is why it bears the exhortation:
```swift
// DO NOT TOUCH THE CODE WITHOUT CHECKING/UPDATING THE DIAGRAM.
// DO NOT TOUCH THE CODE WITHOUT CHECKING/UPDATING THE DIAGRAM.
```
Speaking of the diagram: in an effort to preserve the sanity of future-me, I turned my chicken-scratch drawing into a GraphViz file that I could stick in version control. Here's the rendered graph, in all its splendor (please write in with suggestions about how to make this not look like GraphViz is questioning its life choices):
@ -171,4 +171,4 @@ In writing this blog post I ran into ~~1~~ 2 more edge cases that I had not hand
While I don't regret rewriting the HTML parsing/conversion code I had before, it does feel rather like having opened Pandora's box.
And back to what I alluded to at the beginning, all this extra bookkeeping means the end-to-end HTML &rarr; attributed string conversion performance is now merely 2.3&times; faster than the old SwiftSoup-based implementation (as opposed to 2.7&times; faster before this whole state machine adventure).
And back to what I alluded to at the beginning, all this extra bookkeeping means the end-to-end HTML &rarr; attributed string conversion performance is now merely 2.3&times; faster than the old SwiftSoup-based implementation (as opposed to 2.7&times; faster before this whole state machine adventure).

View File

@ -260,10 +260,6 @@ struct CharacterSetAccumulator {
}
impl CharacterSetAccumulator {
fn new() -> Self {
assert_eq!(FontKey::BOLD.bits().trailing_zeros(), 0);
assert_eq!(FontKey::ITALIC.bits().trailing_zeros(), 1);
assert_eq!(FontKey::MONOSPACE.bits().trailing_zeros(), 2);
Self {
characters: CharacterSets::default(),
keys: FontKeyStack::default(),
@ -295,8 +291,11 @@ impl CharacterSetAccumulator {
FontKey::BOLD
} else if tag.name == local_name!("em")
|| tag.name == local_name!("i")
|| tag.name == local_name!("blockquote")
|| tag.name == local_name!("figcaption")
|| tag.name == local_name!("header")
|| tag.attrs.iter().any(Self::is_hl_cmt)
|| tag.name == local_name!("footer")
|| tag.attrs.iter().any(Self::is_italic)
{
FontKey::ITALIC
} else if tag.name == local_name!("code") {
@ -315,12 +314,12 @@ impl CharacterSetAccumulator {
|| tag.name == local_name!("h6")
}
fn is_hl_cmt(attr: &Attribute) -> bool {
fn is_italic(attr: &Attribute) -> bool {
attr.name.prefix == None
&& attr.name.local == local_name!("class")
// this is a bit of a kludge for performance; the hl-cmt class is only
// ever used by itself, so we don't try to parse the attr value
&& attr.value == "hl-cmt".into()
&& (attr.value == "hl-cmt".into() || attr.value == "italic".into())
}
}
impl TokenSink for CharacterSetAccumulator {