
Similarity seems not optimized for 300,000+ mp3 files


ektorbarajas:
Thanks for the reply. I really appreciate that.

But isn't the cache supposed to minimize the time of a second scan (one that is exactly the same as the first)?

As mentioned before, I ran a test doing the exact same comparison twice: the first time the cache grew because new files were added, but the second run took just as long as the run that created the cache.

Admin:
Yes, the cache does let us skip the decoding and preprocessing step per file, but the time spent on that step is linear, i.e. it is paid only once per file.
Here is an example. For simplicity, pretend we have a very fast computer that can prepare 10 caches per second and compare 100K fingerprint pairs per second.

N files      Compare time (hours)            Cache time (hours)   Caching share of total
             N*(N+1)/2 / 100,000 / 3600      N / 10 / 3600
10,000       0.14                            0.28                 66.66 %
100,000      13.89                           2.78                 16.67 %
300,000      125.00                          8.33                 6.25 %
1,000,000    1388.90                         27.78                1.96 %

You can see that as the number of files grows, the relative importance of caching decreases.
This calculation is idealized, with the duration-skip mechanism disabled.
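To make the arithmetic above easy to reproduce, here is a minimal Python sketch that recomputes the table under the same assumed rates (10 cache preparations per second, 100K fingerprint comparisons per second). The rates and function names are illustrative only and are not part of Similarity itself.

```python
# Minimal sketch: why caching matters less as the library grows.
# Assumptions (taken from the example above, not measured values):
#   - caching (decode + preprocess) runs at 10 files per second
#   - fingerprint comparison runs at 100,000 pairs per second

CACHE_RATE = 10            # files cached per second (assumed)
COMPARE_RATE = 100_000     # fingerprint pairs compared per second (assumed)

def hours(seconds: float) -> float:
    return seconds / 3600

def estimate(n_files: int) -> tuple[float, float, float]:
    """Return (compare_hours, cache_hours, caching_share_percent) for n_files."""
    pairs = n_files * (n_files + 1) // 2           # every file compared against every other
    compare_h = hours(pairs / COMPARE_RATE)        # grows quadratically with N
    cache_h = hours(n_files / CACHE_RATE)          # grows only linearly with N
    share = 100 * cache_h / (compare_h + cache_h)  # caching's share of total time
    return compare_h, cache_h, share

if __name__ == "__main__":
    for n in (10_000, 100_000, 300_000, 1_000_000):
        c, k, s = estimate(n)
        print(f"{n:>9} files: compare {c:9.2f} h, cache {k:7.2f} h, caching share {s:5.2f} %")
```

Running it reproduces the table: at 10,000 files caching dominates (about two thirds of the total), while at 1,000,000 files it is under 2 %, because the pairwise comparison term grows with the square of the file count.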

ektorbarajas:
Thanks for the explanation!

I'll look forward to the next update and to the next algorithm implementation.

Regards
