It seems one of the main reasons is to use Rust’s thread safety to enable “concurrent mode”. Can anyone with the relevant knowledge explain what advantages that would yield for an end fish user?
Here’s one issue they hope to solve with this rewrite: https://github.com/fish-shell/fish-shell/issues/238
End users shouldn’t care what PL the software is written in. A language’s advantages and disadvantages are meaningful for developers only.
The PL can have a large impact on features, bugs, bug reports, troubleshooting, performance and documentation, particularly when dev resources are limited.
It’s hard to see how this opinion holds any water.
Rust is a great choice for an interactive shell that doesn’t have to be core to the OS. Compared to C++, it also makes development more accessible to young programmers.
Except they affect the end result.
While I agree that most people shouldn’t have to be concerned with it, you can’t deny the resource impact of different languages, libraries and frameworks. Compare the memory usage of Discord or Teams with that of FOSS chat applications and you’ll notice those two consistently eating much more memory. You can also compare the compute speed of a higher-level language like Python with a lower-level language like Rust, and you’ll find Rust is quite a bit faster (though it generally takes more dev time). So yes, users shouldn’t have to be concerned with the languages involved, but if you’re running something on a low-resource device, such as a Raspberry Pi, those little details can make all the difference.
One big, long-standing issue is that fish can’t run builtins, blocks or functions in the background or at the same time.
That means a pipeline like

seq 1 5 | while read -l line
    echo $line; sleep 0.1
end | while read -l line
    echo $line; sleep 0.1
end

will have to wait for the first while loop to complete, which takes 0.5s, and then run the second.
So it takes 0.5s until you get the first output and a full second until you get all of it.
Making this concurrent means you get the first line immediately and all of it in 0.5s.
While this is an egregious example, it makes all builtin | builtin pipelines slower.

Other shells solve this via subshells - they fork off a process for at least the middle part of the pipeline. That has some downsides in that it’s annoyingly leaky - you can’t set variables or create a background job in those sections and then wait for them outside, because each of those sections is a new process, so the outer shell never sees them.
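To make that leakiness concrete, here’s a minimal bash sketch (assuming bash’s default behaviour of forking each pipeline stage into its own subshell):

# bash: the while loop is a pipeline stage, so it runs in a subshell
# and its variable assignments never reach the parent shell.
count=0
seq 1 5 | while read -r line; do
    count=$((count + 1))
done
echo "count is $count"   # prints "count is 0", not 5

In fish, by contrast, the loop runs in the shell process itself, so variables set inside it are visible afterwards - which is exactly why it currently has to run serially rather than being forked off.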