Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Springdream

Pages: [1] 2 3
1
Bugs / Re: Slow scanning even with cache
« on: September 19, 2018, 17:37:37 »
I guess you simply have too little RAM.
Consider 8 GB+ for a collection of that size...

2
General / Re: Similarity seems not optimized for 300,000+ mp3 files
« on: September 18, 2018, 08:50:34 »
I also often use grouping, with a use case similar to the one described at the beginning:

As soon as you use grouping, not every item needs to be compared with all the others; each item of group 1 only needs to be compared with group 2.
=> it should be linear then, shouldn't it?

A further improvement might be: as soon as one match is found (often that means one song is already duplicated), further comparing could be stopped for that item, as it is not necessary to know that there is more than one duplicate...

Also, I notice that the count goes up to the number of items in groups 1+2. Group 2 will be the one whose automarked files can be deleted, so shouldn't the count only go up to the number of items in group 2?!
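
A minimal sketch of both ideas in Python: grouping, so that items of group 1 are only checked against group 2, plus stopping at the first match. Here compare() is a placeholder for whatever similarity measure Similarity uses internally, and 0.83 is just an example threshold:

Code:
def find_duplicates(group1, group2, compare, threshold=0.83):
    """Compare each item of group 1 only against group 2 (no pairs inside
    a group), stopping at the first match per item: at most
    len(group1) * len(group2) comparisons, often far fewer."""
    matches = {}
    for a in group1:
        for b in group2:
            if compare(a, b) >= threshold:  # similarity score in [0, 1]
                matches[a] = b              # one duplicate is enough...
                break                       # ...so stop checking this item
    return matches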

Best, Fred

3
Hello,
currently I get all my songs from the radio: I use streamwriter to continuously record e.g. 40 channels (Italian, French, Top 100...) to disk. Within a week you get 100,000 new songs or more.

Now, the main challenge when it comes to deleting duplicates is to delete the files where somebody is talking at the beginning or another song has not finished yet.

=> a new criterion that indicates the correct beginning of a song would be helpful, but I am not sure whether it is possible to get the information from e.g. MusicBrainz that a song starts later than at 0 seconds (because of the preceding song). Often the song is identified properly even if it begins e.g. 20 seconds late.

=> an option that works without internet information could be (it's not done elsewhere, so it could be something new): list the delay of all found matches. The file with the smallest delay could be the original (without speech at the beginning), or at least the file with the least disturbance.

In the end, that would make it possible to retrieve high-quality songs even from radio station recordings...
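
A rough sketch of the "smallest delay wins" idea in Python with numpy. The inputs are assumed to be decoded mono sample arrays (the decoding step is not shown), and plain cross-correlation is only an illustration, not Similarity's actual matching method:

Code:
import numpy as np

def leading_offset_seconds(candidate, reference, rate=44100):
    """Estimate how much extra material (talking, the previous song) the
    candidate has before it lines up with the reference recording.
    Keep the inputs short (e.g. the first 30 seconds) because plain
    cross-correlation is O(n^2)."""
    corr = np.correlate(candidate, reference, mode="full")
    lag = int(corr.argmax()) - (len(reference) - 1)  # shift in samples
    return max(lag, 0) / rate

# among all files matched as the same song, keep the cleanest start:
# best = min(files, key=lambda f: leading_offset_seconds(f.samples, ref_samples))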

Best,
Fred

4
Maybe you should spend a bit more time on it... as it is the only tool on the market that EASILY does what is required for removing duplicates:
- use the precise algorithm with a threshold >83%,
- then analyze all files to detect quality, which is a mixture of clipping, bitrate, max frequency etc.,
- finally set a rule for Automark all,
- and then delete the marked files...

5
Bugs / Re: Slow scanning even with cache
« on: October 15, 2014, 18:56:24 »
The slowing down also applies to me. The first 1000 or so still go reasonably fast, then at, let's say, 10,000 it is down to 1 song per second.
I guess Similarity has to compare each file against every other file, meaning the time effort is proportional to the number of pairs, n·(n−1)/2, i.e. it grows quadratically with n (for 100,000 files that is already about 5·10⁹ comparisons)?

I also have the feeling that the cache does not speed things up much.

ALSO, some more questions on the cache:
1) does it slow down if there are too many items in the cache that do not exist anymore? AND
2) if so, maybe a function like "delete cache items that do not exist anymore" would be nice
3) I have the feeling that the cache refers to the absolute paths of the files. Every time a folder name changes, all files below that folder are lost from the cache. Maybe the cache could refer to some other file properties instead, like a hash/checksum etc., or name, size and date? (see the sketch below)
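
A minimal sketch of what a path-independent cache key could look like, in Python. This is purely hypothetical; Similarity's real cache format is not public:

Code:
import hashlib
import os

def cache_key(path, head_bytes=64 * 1024):
    """Key cache entries by content (hash of the first 64 KiB) plus file
    size instead of the absolute path, so renaming or moving a folder
    does not invalidate every entry underneath it."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        digest = hashlib.sha1(f.read(head_bytes)).hexdigest()
    return (size, digest)

# pruning stale entries (question 2): after a scan, keep only keys seen on disk
# cache = {k: v for k, v in cache.items() if k in keys_seen_this_scan}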

6
Bugs / Re: Slow scanning, crashes, etc
« on: October 15, 2014, 18:45:28 »
Could it be that it is the comparison algorithm that slows down the progress, and not the calculation of the fingerprints? Also, with >100,000 songs in the cache, the cache seems to become ineffective?!

7
News / Re: Beta version 1.9.2
« on: October 15, 2014, 18:40:56 »
I would also like to try the new method you mentioned. When do you think you will make it available to the public?

8
News / Re: Beta version 1.9.2
« on: October 15, 2014, 18:39:50 »
Hello,
one suggestion regarding the time estimation (which I like): for big selections it takes quite a long time until all files and folders have been scanned completely, and until then the estimate makes no sense. Maybe that scan could be finished completely at the beginning..?!
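
In pseudo-code the suggestion amounts to something like this (hypothetical, Python, not Similarity's code):

Code:
import time

def eta_seconds(start_time, files_done, files_total, scan_finished):
    """Return a remaining-time estimate only once the file list is final;
    while files_total is still growing, any estimate is misleading."""
    if not scan_finished or files_done == 0:
        return None  # show "estimating..." instead of a nonsense number
    elapsed = time.time() - start_time
    return elapsed / files_done * (files_total - files_done)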
Regards...

9
General / Re: Open CL and NVIDIA CUDA - GeForce 8600GT
« on: October 12, 2014, 16:35:14 »
I could resolve it by using a quite old driver for my N460GTX.

10
Bugs / Re: Slow scanning, crashes, etc
« on: October 12, 2014, 13:14:27 »
The slowing-down effect described earlier also applies to me, using the 1.9.1 and 1.9.2 betas...
Any solution?

11
General / Re: Open CL and NVIDIA CUDA - GeForce 8600GT
« on: October 12, 2014, 13:02:36 »
Hello,
I have the same issues on my GTX460.
Although the log says:

2014-10-12 11:43:06   OpenCL: NVIDIA CUDA - GeForce GTX 460
2014-10-12 11:43:06   OpenCL: GeForce GTX 460 version 340.52 (OpenCL 1.1 CUDA)
2014-10-12 11:43:06   OpenCL: Work group size 64
2014-10-12 11:43:06   OpenCL ready...

When I run the benchmark I get "Calculation error or device busy".

The log repeatedly says

2014-10-12 11:44:05   COpenCL::Process:416 failed [false]
2014-10-12 11:44:07   COpenCL::Process:416 failed [false]
2014-10-12 11:44:15   COpenCL::Process:416 failed [false]
2014-10-12 11:44:25   COpenCL::Process:416 failed [false]
2014-10-12 11:45:43   System error [997]
2014-10-12 11:45:43   COpenCL::Process:416 failed [false]
2014-10-12 11:45:43   System error [15100]
2014-10-12 11:45:43   CAudioComparer::BlockCL:1035 failed [false]
2014-10-12 11:45:43   System error [15105]
2014-10-12 11:45:43   CAudioComparer::ProcessCL:1124 failed [false]
2014-10-12 11:45:43   OpenCL use failed
2014-10-12 11:45:43   System error [997]
2014-10-12 11:45:43   COpenCL::Process:416 failed [false]

I have no idea whether Similarity actually ends up using OpenCL now or not...

Any solution?

I am using Win7 Professional, 64-bit.
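
For what it's worth, one way to test the OpenCL stack outside of Similarity, assuming Python with the pyopencl package installed; if even this fails, the driver is the culprit:

Code:
import pyopencl as cl

# list every platform/device the driver exposes, then try to build a
# trivial kernel; if this fails too, the driver stack is the problem,
# not Similarity
for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(platform.name, "-", device.name, "-", device.version)

ctx = cl.create_some_context()
cl.Program(ctx, "__kernel void noop(__global float *x) {}").build()
print("OpenCL context and kernel build OK")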



12
Wishlist / Link to duplicate when deleting
« on: January 04, 2013, 17:26:07 »
To keep album information complete even though duplicates are deleted:
- add a text file with a link to the original, and/or
- a link/shortcut to the original
(a sketch of the idea follows below)
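
A minimal sketch of the text-file variant in Python (a hypothetical helper, not a Similarity feature):

Code:
import os

def delete_with_breadcrumb(duplicate_path, original_path):
    """Instead of deleting silently, leave a small .txt file behind that
    points to the surviving copy, so the album folder stays documented."""
    with open(duplicate_path + ".deleted.txt", "w", encoding="utf-8") as f:
        f.write("Duplicate removed; the kept copy is at:\n")
        f.write(original_path + "\n")
    os.remove(duplicate_path)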

13
I just worked with Jaikoz, and it stores some time-consuming information, like acoustic fingerprints and the MusicBrainz ID, in the file itself.
The next time, that data does not have to be calculated again.

Maybe Similarity could also utilize such already-stored IDs, OR, if they are of sufficient quality, use them for the precise comparison instead.

A cooperation with e.g. Jaikoz might also be interesting, as YOU make the best duplicate checking :-) !
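
What storing those results in the file itself could look like, sketched with Python's mutagen library. The TXXX descriptions here are illustrative conventions (Picard stores its Acoustid fingerprint this way), not something Similarity supports today:

Code:
from mutagen.id3 import ID3, TXXX

def store_analysis(mp3_path, fingerprint, recording_id):
    """Persist expensive-to-compute data in the file's own ID3 tags so it
    survives moves, renames and cache resets."""
    tags = ID3(mp3_path)
    tags.add(TXXX(encoding=3, desc="Acoustid Fingerprint", text=[fingerprint]))
    tags.add(TXXX(encoding=3, desc="MusicBrainz Recording Id", text=[recording_id]))
    tags.save()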

14
I often lose that information when mp3 files are moved or the Similarity cache is cleared.
If it is not too big, it may make sense to put it into the mp3 properties.

15
Bugs / Re: [SOLVED] Please Help: Automark ignores mark priority
« on: January 04, 2013, 00:12:29 »
OK, it works, thanks a lot!
