#48 - August 11th 18, 05:52 PM
Posted to alt.comp.freeware, alt.comp.os.windows-10
From: Paul[_32_]
Subject: Sort files by aspect ratio?

Terry Pinnell wrote:

> A very useful learning exercise, thanks. I now know how to run a
> copy/pasted PS1 script. But not a practical route to a solution, as it
> delivers only a CSV file, not a folder of selected or renamed files.
>
> BeAr's recommended solution, Dimensions2Folders, is the one I've started
> using. But it will also be interesting to see the AWK/GAWK script
> Reinhard is developing.
>
> Terry, East Grinstead, UK

What you were supposed to learn from the exercise is
that you can "scan" the entire C: drive in a couple
of seconds and get a CSV with all the image sizes.

Now, feed the CSV into some other scripting language.

This is intended to reduce the initial scan time,
not the curated copy time.
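
As a rough sketch of that "feed the CSV" step: assuming
the scan produced columns named Path, Width and Height
(my names, not necessarily what the demo emitted),
something like this would sort copies into one folder
per aspect ratio:

    # Sketch only: sizes.csv and the Path/Width/Height column
    # names are placeholders for whatever the scan step produced.
    Import-Csv sizes.csv | ForEach-Object {
        if ([int]$_.Height -ne 0) {
            # Round to two decimals so all 16:9 images land together
            $ratio = [math]::Round([int]$_.Width / [int]$_.Height, 2)
            $dest  = "C:\Sorted\$ratio"
            New-Item -ItemType Directory -Path $dest -Force | Out-Null
            Copy-Item -Path $_.Path -Destination $dest
        }
    }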

The fun part would be seeing whether gawk can fork a
powershell subshell to run that PS1 script.

The file copy time would be a fixed overhead for
all participants and their code. You can't make
that part go faster. There are some optimizations
you could attempt, but they would be expensive
in programming time, and non-scalable. To be
considered a "win", an optimization should work
properly no matter what the total quantity of
files is; if the tree of things to copy grows
too large, a fragile optimization stops working.
(For example, using a RAMDisk to hold a temporary
copy would be defeated if the file set was larger
than the RAMDisk.)

*******

The only purpose of showing you that demo is
to demonstrate that the three hours you waited
for Search Indexer to bring Windows.edb up to
date weren't wasted. For any image format that
defines a Width and Height, you can tap into
Windows.edb and get the information, instead
of "walking" the file tree and extracting it
manually. The Search Indexer also keeps the
index up to date (mostly) in real time, so
scanning the file tree yourself doesn't give
you a meaningfully fresher view of it. Search
Indexer is working on that, constantly.
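
For what it's worth, the scripted way to tap that index
is the Windows Search OLE DB provider, which reads the
same store that Windows.edb backs. A minimal sketch of
that kind of query (my guess at the approach, not a
copy of the demo script):

    # Ask the Windows Search index for image dimensions.
    # System.Image.HorizontalSize/VerticalSize are the stock
    # property-system names for pixel width and height.
    $connStr = "Provider=Search.CollatorDSO;" +
               "Extended Properties='Application=Windows'"
    $conn = New-Object System.Data.OleDb.OleDbConnection($connStr)
    $conn.Open()
    $sql = "SELECT System.ItemPathDisplay, System.Image.HorizontalSize, " +
           "System.Image.VerticalSize FROM SYSTEMINDEX " +
           "WHERE System.Image.HorizontalSize IS NOT NULL"
    $cmd = New-Object System.Data.OleDb.OleDbCommand($sql, $conn)
    $rdr = $cmd.ExecuteReader()
    while ($rdr.Read()) {
        # Emit path,width,height -- one CSV row per indexed image
        '"{0}",{1},{2}' -f $rdr.GetValue(0), $rdr.GetValue(1), $rdr.GetValue(2)
    }
    $conn.Close()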

If you dump a million files onto C: and then
run that script, then obviously the indexing
won't have caught up yet. This concept is
mainly intended for "stable data silos", where
the content you're searching against is
collected a little bit at a time, and later
you want to search your corpus of work and
find just the right item. If you're tossing
crap onto a scratch disk and then wanting to
search it, one of the other programs would
work better for that.

Paul