
Version 1.0.0 released


Admin:
+ New tag editor (results tab)
+ Some improvements to the comparing algorithm
+ Some visual improvements
- Many small visual/translation bugs fixed

Release version

If you are interested in translating Similarity into your language, write to the support email.

emn13:
Interesting date there for the 1.0 release ;-).

Good work, though!

ferbaena:
version 0.936 used all 4 cores and 8 threads on an i7 860

the new version 1.0 barely uses 4 threads, and the performance decrease is noticeable

the edit option is nice though

keep up the good work

Admin:
ferbaena
Yes, this is a bug: only a single thread was used. Please download the new version.
Thanks for your message.

ferbaena:
Yes, the new 1.0.1 now behaves like 0.936: all cores, all threads.


Problem:

I know for a fact that there are two duplicate files in two different folders (actually there are more, but this example uses two).

I create a new folder and make copies of the two duplicate files in that folder.

I run Similarity (content only) on this new folder with different settings,
but it is only when the content threshold is down to 0.65 that it detects the duplicates.

Now the big problem:

I have around 72,000 MP3s to scan. The cache is 74,150.

If I set the content threshold at 0.65 there will be more than a couple of million
duplicates by the end of the scan (right now, having scanned 29,404 files and checked only 7,462, it has already found 400,016 duplicates), and the experimental algorithm will take a couple of days to finish.

Now the question:

Why does it take a setting of 0.65 to find these duplicates now, when running the program before with content settings between 0.85 and 0.95 found the majority of the others?

I know it is difficult and it's not a Similarity-only problem.

I bought a license for Phelix from Phonome Labs a couple of years ago and it does not find all of the duplicates either.

Thank you
