discuss@lists.openscad.org

OpenSCAD general discussion Mailing-list

Re: points list from stl

JB
Jordan Brown
Tue, Oct 25, 2022 5:30 PM

On 10/25/2022 6:55 AM, Roger Whiteley via Discuss wrote:

I use an online STL -> points convertor which creates an array,
loading this into the editor results in a VERY slow editor.

Somebody else mentioned that problem and said that it was tied to having
very long lines.  You might try wrapping the data and seeing if the
editor is less pathological then.

(But it could also just be that the editor doesn't like super-big
files.  It is, after all, primarily intended for human-scale files.)

RW
Rogier Wolff
Wed, Oct 26, 2022 8:51 AM

On Tue, Oct 25, 2022 at 10:30:32AM -0700, Jordan Brown wrote:

(But it could also just be that the editor doesn't like super-big
files.  It is, after all, primarily intended for human-scale files.)

I was interviewed by G****e at one point in time.

  • You have a database of 100-1000 items and you need to access them a
    lot given just one key. How do you program that?

-> I'd use a hash. Just in case things get bigger later on.

  • OK. Now the list grows to 1M items, what do you do?

-> 1 million items still fits in RAM on a decent machine. Should be
plenty fast.

  • Things grow to 100M items, and one machine is not enough.

-> Distribute things among multiple machines. Use the hash to determine
which machine to use.
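
The three stages above can be sketched in a few lines. This is a hypothetical illustration of the idea, not anyone's actual interview answer or production code: a plain hash table for the in-memory stages, and the same hash reused to pick a shard when one machine is no longer enough.

```python
import hashlib

def shard_for(key: str, num_machines: int) -> int:
    """Map a key deterministically to a machine via its hash."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return h % num_machines

# Stages 1 and 2: a dict (a hash table) gives O(1) lookup by key,
# whether there are 1000 items or 1 million in RAM.
db = {f"item{i}": i for i in range(1000)}
value = db["item42"]

# Stage 3: when the data outgrows one machine, the same hash decides
# which machine holds each key, so any client can route a lookup
# without consulting a central index.
machine = shard_for("item42", num_machines=16)
```

The point is that the first, small-scale version already uses the structure (a hash) that the large-scale version needs, so growth never forces a rewrite.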

That's the way you should program things. Sure, in a "quick and dirty"
situation you might just grep through the list. But when things just
MIGHT get big, it is always possible to code things in a way that
they don't get slow.

For an editor: the data that the user is involved in is about the 2k
that fits on a page. When properly programmed, it shouldn't matter that
there are multiple megabytes "out of view". Whether it's handling the
2k in view, or maybe the surrounding 10k or 1M, a modern computer
should comfortably handle that...
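
One way to make "out of view" data cheap, sketched hypothetically (this is not how QScintilla is actually implemented): keep an index of line-start offsets, so fetching the viewport costs time proportional to the window, not the file.

```python
class LineIndex:
    """Index of line-start offsets: rendering a viewport is O(window),
    independent of total file size."""

    def __init__(self, text: str):
        self.text = text
        self.starts = [0]
        for i, ch in enumerate(text):
            if ch == "\n":
                self.starts.append(i + 1)

    def window(self, first_line: int, count: int) -> str:
        """Return `count` lines starting at `first_line` by slicing
        between two precomputed offsets -- no scan of the whole file."""
        lo = self.starts[first_line]
        last = first_line + count
        hi = self.starts[last] if last < len(self.starts) else len(self.text)
        return self.text[lo:hi]

# A million-line buffer; extracting the visible page touches only it.
big = "\n".join(f"line {i}" for i in range(1_000_000))
idx = LineIndex(big)
page = idx.window(500_000, 3)
```

Real editors go further (piece tables, gap buffers) so that edits are also cheap, but the principle is the same: the work done per keystroke should scale with the visible window, not the megabytes around it.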

When it gets slow with moderately large files (I'm guessing these are
not multi-gigabyte files), there is an unnecessary bottleneck.

Roger. 

--
** R.E.Wolff@BitWizard.nl ** https://www.BitWizard.nl/ ** +31-15-2049110 **
**    Delftechpark 11 2628 XJ  Delft, The Netherlands.  KVK: 27239233    **
f equals m times a. When your f is steady, and your m is going down
your a is going up.  -- Chris Hadfield about flying up the space shuttle.

JB
Jordan Brown
Wed, Oct 26, 2022 7:01 PM

On 10/26/2022 1:51 AM, Rogier Wolff wrote:

For an editor: The data that the user is involved in is about the 2k
that fits on a page. When properly programmed it shouldn't matter that
there are multiple megabytes "out of view". Handling 2k or maybe the
surrounding 10k or 1M. A modern computer should comfortably handle
that...

When it gets slow with moderately large files (I'm guessing not
multi-gigabyte files) then there is an unnecessary bottleneck.

No doubt.

But the editor, QScintilla https://www.scintilla.org/, is an
externally-sourced component, not part of OpenSCAD proper.
