
Feature request: text selection #1

Open
trusktr opened this issue May 3, 2020 · 8 comments
Labels
feature request New feature or request

Comments

@trusktr
Contributor

trusktr commented May 3, 2020

First, nice project!

I know this may be a ways off, but text selection with copy/paste would be awesome!

@felixmariotto felixmariotto added the feature request New feature or request label May 4, 2020
@felixmariotto
Owner

Hi, thank you for your feedback!

Yes, text selection would be nice to have, although there will be a couple of milestones before we get there.
I have two questions:

  • What do you think the user experience of copying and pasting should look like in immersive VR with remotes? Something like on mobile, with a popup offering to copy after selecting a word?
  • Do you have a particular use case in mind? (just out of curiosity)

I'll add that to the roadmap.

@trusktr
Contributor Author

trusktr commented May 14, 2020

  • First, I think the mesh UI should not be dependent on AR/VR, but usable in any app. For example, someone may want to make a 2D UI (or 2.5D UI) in WebGL.
  • Based on that, I think it'd be great to start with normal mouse/keyboard support, then re-use some of that infrastructure for the AR/VR interaction.

In my particular use case, I'm making a general purpose HTML lib (custom elements) over at http://lume.io, which can currently mix DOM/CSS with WebGL (see the "Buttons with shadow" example, and right-click and inspect the buttons to see the 3D markup).

WebGL rendering is powered by Three.js under the hood currently. It supports a WebGL-only rendering mode by starting a scene with <i-scene disable-css experiment-webgl>, but in that mode I don't have much in the way of UI yet (see the material-texture or obj-model examples for WebGL stuff).

In general, UI built with HTML but rendered in WebGL is a feature goal, but for now the features it supports are a subset of the built-in Three.js features.

I am interested to help here with three-mesh-ui, at least with some ideas. I could use it as a lower level tool to expose the abilities in the higher-level HTML interfaces in Lume. I think that'd be a neat collaboration!

@felixmariotto
Owner

> In my particular use case, I'm making a general purpose HTML lib (custom elements) over at http://lume.io, which can currently mix DOM/CSS with WebGL (see the "Buttons with shadow" example, and right-click and inspect the buttons to see the 3D markup).

This looks neat! I'd be honored if you used three-mesh-ui as a low-level tool for your lib.

I just added support for MSDF fonts for efficient large-text rendering; I think it could be of some use for your purposes. It's built in, it doesn't depend on this mess.

About support for selection, I'd advise you to wait just a bit longer, since I will shortly add support for a third type of text: InstancedText. It will be based on InstancedMesh and mainly address the need to render fast-updating text, like time counters or loading logs... Once it's done, the Text components should be stable, and I will focus on making more examples and ready-made UI components.
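The fast-updating-text idea above can be illustrated without three.js: keep a fixed pool of glyph instances and rewrite per-instance data each frame instead of rebuilding geometry. In three.js this data would feed an InstancedMesh's `instanceMatrix`; the sketch below uses a plain array, and `GlyphPool` is a hypothetical name, not three-mesh-ui's actual API.

```javascript
// Minimal sketch of the InstancedText concept: a fixed-capacity pool of
// glyph instances that is overwritten on each text change, so updating a
// time counter allocates nothing beyond the initial pool.
class GlyphPool {
  constructor(capacity) {
    this.capacity = capacity;
    this.instances = new Array(capacity).fill(null);
    this.count = 0; // how many instances are live right now
  }

  // Re-lay the whole string; clamps to the pool size rather than growing.
  setText(text, advance = 1) {
    this.count = Math.min(text.length, this.capacity);
    for (let i = 0; i < this.count; i++) {
      // In three.js you would write a matrix per instance here instead.
      this.instances[i] = { char: text[i], x: i * advance };
    }
    return this.count;
  }
}
```

A renderer would then draw `pool.count` instances per frame, ignoring the stale tail of the array.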

Since you're interested in the interaction between controls and UI, I would be interested in your opinion on something. kalegd filed a PR that extracts the raycasting logic from VRControl and puts everything in the button example. The original goal was to make the lib more user-friendly, as discussed here, but now I think the example file looks very clumsy... What are your thoughts on this? Do you think three-mesh-ui should provide controls components? Or should it describe in examples how to use raw three.js to make controls, like in kalegd's PR?

@trusktr
Contributor Author

trusktr commented May 18, 2020

> a PR that extracts the raycasting logic from VRControl

I haven't inspected the change, but if it decouples that stuff from VR, it sounds good because not everyone wants to make a VR app. The more usable the bits and pieces are in any WebGL app, the better.

> The original goal was to make the lib more user-friendly, as discussed here

I responded there.

> What are your thoughts on this? Do you think three-mesh-ui should provide controls components? Or should it describe in examples how to use raw three.js to make controls, like in kalegd's PR?

I would first try to make new classes that extend the Three.js controls classes. If something turns out not to be possible with class extension, I'd open an issue about it on Three.js, and in the meantime copy the Three.js controls class into the lib, make the modifications there, and extend the new class on top of the modified base class.

From my experience with that sort of thing, I like to re-use as much of Three.js as possible without modifying it (or with as few modifications as possible) before extending it, which makes it easier to migrate to new versions of Three.js later.
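The extend-rather-than-modify approach described above can be sketched generically. `BaseControls` here is a stand-in for a three.js controls class (e.g. OrbitControls), and `UIControls`/`onHover` are hypothetical names; the point is that the subclass adds UI behaviour while reusing the base `update()` untouched.

```javascript
// Stand-in for a three.js controls class; assume a real one in practice.
class BaseControls {
  constructor() { this.enabled = true; }
  update() { return 'base-update'; }
}

// UI layer extends the base instead of patching it, so the base class can
// be swapped for a newer three.js version without changing this code.
class UIControls extends BaseControls {
  constructor(onHover) {
    super();
    this.onHover = onHover; // extra hook the UI layer needs
  }
  update() {
    const result = super.update(); // reuse the base behaviour as-is
    if (this.enabled && this.onHover) this.onHover();
    return result;
  }
}
```

Only if `super.update()` proves insufficient would the base class need to be vendored and modified, as described above.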

@trusktr
Contributor Author

trusktr commented May 19, 2020

This is really interesting, see the "Textor" demo here: https://www.lutzroeder.com/web.

It is a canvas-2D text editor. We could use a very similar trick, with a hidden <textarea> element, to capture all the text-manipulation actions and map them onto the WebGL rendering.
That may also give some ideas for a CanvasText class.
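The hidden-<textarea> trick can be sketched as two pieces: reading the textarea's native editing state, and splitting the text into runs the renderer can draw. Function names here (`snapshotEditorState`, `selectionRuns`, `renderGlyphs`) are illustrative, not from any library.

```javascript
// Read the native editing state the browser maintains for us: the text
// plus the selection range, which <textarea> exposes directly.
function snapshotEditorState(textarea) {
  return {
    text: textarea.value,
    selectionStart: textarea.selectionStart,
    selectionEnd: textarea.selectionEnd,
  };
}

// Split the text into before/selected/after runs so the WebGL renderer
// can draw a highlight behind the selected span.
function selectionRuns({ text, selectionStart, selectionEnd }) {
  return {
    before: text.slice(0, selectionStart),
    selected: text.slice(selectionStart, selectionEnd),
    after: text.slice(selectionEnd),
  };
}

// Browser wiring (not runnable in Node), roughly:
//   const ta = document.createElement('textarea');
//   ta.style.position = 'absolute';
//   ta.style.left = '-9999px'; // off-screen but still focusable
//   document.body.appendChild(ta);
//   ta.addEventListener('input', () =>
//     renderGlyphs(selectionRuns(snapshotEditorState(ta))));
```

Because the textarea handles keyboard input, IME composition, and clipboard natively, the WebGL side only ever mirrors state; it never reimplements editing.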

@trusktr
Contributor Author

trusktr commented May 19, 2020

The Zebra canvas UI also has its own text rendering and editing. Might be able to borrow some ideas from there.

Scroll down to see the Google Maps demo; it looks like it is re-made entirely in canvas 2D. EDIT: never mind, it's just Zebra UI wrapping the HTML-based Google Maps.

@felixmariotto
Owner

As mentioned in #13, CanvasText is interesting and should be added to the text classes after refactoring.

However, this one is a bit special... All the other text classes (geometryText, MSDFText, InstancedText, and VectorText) can be boiled down to individual glyphs. It's possible to use a common module (let's call it textFormater) to position glyphs in a container and call the right method to render the text according to the user's choice (e.g. `if (this.textType === 'msdf') this.createMSDFGlyph()`).
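The textFormater idea above can be sketched as one shared layout step plus a per-type glyph factory chosen from a lookup table. The factory names and object shapes below are illustrative, not three-mesh-ui's real API.

```javascript
// One factory per text type; each knows how to create its kind of glyph.
// Real implementations would build geometry, MSDF quads, instances, etc.
const glyphFactories = {
  geometry:  (char, x) => ({ type: 'geometry',  char, x }),
  msdf:      (char, x) => ({ type: 'msdf',      char, x }),
  instanced: (char, x) => ({ type: 'instanced', char, x }),
  vector:    (char, x) => ({ type: 'vector',    char, x }),
};

// The common module: positions glyphs in a container and delegates glyph
// creation to whichever factory matches the requested text type.
function formatText(text, textType, advance = 1) {
  const create = glyphFactories[textType];
  if (!create) throw new Error(`Unknown text type: ${textType}`);
  return [...text].map((char, i) => create(char, i * advance));
}
```

CanvasText is the awkward case: it rasterizes whole strings rather than per-glyph objects, so it doesn't fit a per-glyph factory table cleanly, which is exactly the question below.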

Can you think of any way of inserting CanvasText beautifully into this organisation?

@trusktr
Contributor Author

trusktr commented May 20, 2020

> Can you think of any way of inserting CanvasText beautifully into this organisation?

Still learning the code base, but I'll keep these things in mind for later once I understand everything more.

s-aradachi-unext pushed a commit to s-aradachi-unext/three-mesh-ui that referenced this issue Jul 3, 2023
s-aradachi-unext added a commit to s-aradachi-unext/three-mesh-ui that referenced this issue Jul 26, 2023