• Rade0nfighter@lemmy.world · 44 points · 7 months ago

    Finally a proper vertical tabs option!

    Personal preference ofc but after trying it on a whim I can’t go back.

    • GenderNeutralBro@lemmy.sdf.org · 19 points · 7 months ago

      Tree Style Tabs, Sidebery, or similar are must-haves. I do try to clean up my tabs regularly, but I almost always have more open than can conveniently be displayed in a horizontal tab bar.

      Vertical screen real estate is at a higher premium on desktop in general, anyway. There's no point in keeping my browser at the full width of my screen when most sites adamantly refuse to use the space (case in point: Lemmy).

    • chicken@lemmy.dbzer0.com · 39 points · 7 months ago

      If it’s using a local model like it says I think this is fine:

      We’re looking at how we can use local, on-device AI models – i.e., more private – to enhance your browsing experience further. One feature we’re starting with next quarter is AI-generated alt-text for images inserted into PDFs, which makes it more accessible to visually impaired users and people with learning disabilities.

    • Carighan Maconar@lemmy.world · 22 points · 7 months ago

      Did you read more than just the three words naming the feature? Their use case is actually smart, and could potentially help users a lot.

    • Chewy@discuss.tchncs.de · 7 points · 7 months ago (edited)

      The recent addition of local in-browser website translation is an awesome feature I’ve missed for many years. The only alternatives I’ve found previously were either paid or Google Translate plugins. This translation feature is an example of an AI feature they’ve added.

          • LWD@lemm.ee · 3 points · 7 months ago

            I hate to break it to you, but right now, AI is being pushed by Tesla, Microsoft, Apple, Google. Pretty much every major megacorporation.

            The environmental impacts are staggeringly horrible.

            But sure, AI good.

      • akilou@sh.itjust.works · 3 points · 7 months ago

        MS Word has this feature and it's absolutely terrible. 508-compliant alt text, which we're required to include in documents we publish at work, needs a couple of sentences of explanation. Word uses maybe three words to describe an image.

    • GenderNeutralBro@lemmy.sdf.org · 18 points · 7 months ago

      A well-implemented language model could be a huge QOL improvement. The fact that 90% of AI implementations are half-assed ChatGPT frontends does not reflect the utility of the models themselves; it only reflects most companies' lack of vision and their haste to ship.

      Arc Browser has some interesting AI features, but since they’re shipping everything to OpenAI for processing, it’s a non-starter for me. It also means the developers’ interests are not aligned with my own, since they are paying by the token for API access. Mozilla is going to run local LLMs, so everything will remain private, and limited only by my own hardware and my own configuration choices. I’m down with that.

      I’d love to see Firefox auto-fetch results from web searches and summarize them for me, bypassing clickbait, filler, etc. You’ve probably seen AI summary bots here on Lemmy, and I find them very helpful at cutting the crap and giving me exactly what I want, which is information in text form. That’s something that’s harder and harder to get from web sites nowadays. Never see a recipe writer’s life story again!

    • ebits21@lemmy.ca · 2 points · 7 months ago

      Don’t be ridiculous…. Now an AI powered bidet that really gets the shit off your butthole…

  • snek_boi@lemmy.ml · 17 points · 7 months ago (edited)

    I can’t see how AI can’t be done in a privacy-respecting way [edit: note the double negative there]. The problem that worries me is performance. I have used text-to-speech AI and it absolutely destroys my poor processors. I really hope there’s an efficient way of adding alt text, or of turning the feature off for users who don’t need it.

    • MangoPenguin@lemmy.blahaj.zone · 21 points · 7 months ago

      If it runs locally then no data ever leaves your system, so privacy would be respected. There are tons of good local-only LLMs out there right now.

      As far as performance goes, current x86 CPUs are awful at it, but upcoming chips from ARM, and likely from Intel/AMD in the future, will be much better at running this stuff.

    • Even_Adder@lemmy.dbzer0.com · 29 points · 7 months ago (edited)

      It’s local. You’re not sending data to their servers.

      We’re looking at how we can use local, on-device AI models – i.e., more private – to enhance your browsing experience further. One feature we’re starting with next quarter is AI-generated alt-text for images inserted into PDFs, which makes it more accessible to visually impaired users and people with learning disabilities. The alt text is then processed on your device and saved locally instead of cloud services, ensuring that enhancements like these are done with your privacy in mind.

      At least use the whole quote.

    • GenderNeutralBro@lemmy.sdf.org · 5 points · 7 months ago

      That’s somewhat awkward phrasing but I think the visual processing will also be done on-device. There are a few small multimodal models out there. Mozilla’s llamafile project includes multimodal support, so you can query a language model about the contents of an image.

      Even just a few months ago I would have thought this was not viable, but the newer models are game-changingly good at very small sizes. Small enough to run on any decent laptop or even a phone.
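      For anyone curious how that looks in practice: a llamafile bundles llama.cpp's HTTP server, so you can query a multimodal model about an image from a script. This is a rough sketch, assuming a LLaVA-style llamafile serving on the default port 8080; the `image_data` / `[img-N]` prompt convention is llama.cpp's `/completion` endpoint, and field names can vary between versions.

      ```python
      import base64
      import json

      def build_caption_request(image_path: str, prompt: str) -> dict:
          """Build a llama.cpp /completion payload asking about an image."""
          # Encode the image as base64 for the image_data field
          with open(image_path, "rb") as f:
              b64 = base64.b64encode(f.read()).decode("ascii")
          # LLaVA prompts reference images by [img-N] markers that match
          # the "id" of each image_data entry (llama.cpp convention)
          return {
              "prompt": f"USER: [img-1] {prompt}\nASSISTANT:",
              "image_data": [{"data": b64, "id": 1}],
              "n_predict": 128,
          }

      if __name__ == "__main__":
          import urllib.request
          payload = build_caption_request(
              "photo.jpg", "Describe this image in one sentence."
          )
          req = urllib.request.Request(
              "http://localhost:8080/completion",  # assumed default llamafile port
              data=json.dumps(payload).encode(),
              headers={"Content-Type": "application/json"},
          )
          with urllib.request.urlopen(req) as resp:
              print(json.loads(resp.read())["content"])
      ```

      The same binary also exposes an OpenAI-compatible endpoint, so existing client libraries can usually be pointed at it instead.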

  • 0xb@lemm.ee · 3 points · 7 months ago

    They shouldn’t have announced until it was ready for release.

    I just bought my first ultrawide monitor and was getting into testing the available options for vertical tabs, but I don’t like having them duplicated, so I was thinking this should be a native feature.

    And now I can’t wait.

  • Sam_Bass@lemmy.ml · 3 points · 7 months ago

    Ugh. I hate tabs on mobile. I had to trash my homepage settings in Chrome to stop it loading a new tab every time I opened the app. Firefox is bloated enough w/o the tab bullshit.

      • 𝘋𝘪𝘳𝘬@lemmy.ml · 1 point · 7 months ago (edited)

        Yes, I mean that’s one of the points explicitly listed, no?

        The only mention of Wayland I can find on the linked page is a comment mentioning issues with Wayland in Chromium.