I'm trying to move a big file (a movie) into the Redis cache in chunks. I'm using StackExchange.Redis on a Windows box. Redis is configured with allkeys-lru and appendfsync everysec. maxmemory is set to 100mb.
    ConnectionMultiplexer redis2 = ConnectionMultiplexer.Connect("localhost:6380,syncTimeout=100000");
    IDatabase db2 = redis2.GetDatabase();
    const int ChunkSize = 4096;
    string fileName = @"D:\movie.mp4";

    using (var file = File.OpenRead(fileName))
    {
        int bytesRead;
        int inCounter = 1;
        var buffer = new byte[ChunkSize];
        while ((bytesRead = file.Read(buffer, 0, buffer.Length)) > 0)
        {
            // Store each chunk under a key like "D:\movie.mp4" + chunk number
            db2.StringSet(file.Name + inCounter, buffer);
            inCounter++;
        }
    }

With ChunkSize = 4096 this works great. When I change the chunk size to 65536, the server crashes with the following log:
    [2816] 20 Jul 23:06:42.300 * Starting automatic rewriting of AOF on 6766592700% growth
    [2816] 20 Jul 23:06:42.331 * Background append only file rewriting started by pid 3672
    [3672] 20 Jul 23:06:42.331 # Write error writing append only file on disk: Invalid argument
    [3672] 20 Jul 23:06:42.331 # rewriteAppendOnlyFile failed in qfork: Invalid argument
    [2816] 20 Jul 23:06:42.440 # fork operation complete
    [2816] 20 Jul 23:06:42.440 # Background AOF rewrite terminated with error
    [2816] 20 Jul 23:06:42.549 * Starting automatic rewriting of AOF on 7232582200% growth
    [2816] 20 Jul 23:06:42.581 * Background append only file rewriting started by pid 1440
    [2816] 20 Jul 23:06:42.581 # Out Of Memory allocating 10485768 bytes!
    [2816] 20 Jul 23:06:42.581 # === REDIS BUG REPORT START: Cut & paste starting from here ===
    [2816] 20 Jul 23:06:42.581 # ------------------------------------------------
    [2816] 20 Jul 23:06:42.581 # !!! Software Failure. Press left mouse button to continue
    [2816] 20 Jul 23:06:42.581 # Guru Meditation: "Redis aborting for OUT OF MEMORY" #..\src\redis.c:3467
    [2816] 20 Jul 23:06:42.581 # ------------------------------------------------
    [1440] 20 Jul 23:06:42.581 # Write error writing append only file on disk: Invalid argument
    [1440] 20 Jul 23:06:42.581 # rewriteAppendOnlyFile failed in qfork: Invalid argument

Any ideas?
This turned out to be quite an interesting and surprising problem!
The real cause is memory fragmentation in the allocator Redis on Windows uses (dlmalloc). Microsoft is going to make this better, but expect it to take time.
In the meantime, there is a workaround.
The proper way to fix this (for now):
Configure both the maxmemory and maxheap parameters, and make maxheap larger than maxmemory.
So if you want maxmemory=100mb, make maxheap 5x or 10x larger, e.g. maxheap=500mb or maxheap=1000mb. I don't think there is a rule of thumb for how much larger maxheap needs to be, which is why it's such a tricky problem.
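As a concrete sketch, the two settings go into your Redis for Windows config file (maxheap is specific to the MSOpenTech Windows port; the 10x ratio here is just the illustrative value from above, not a verified recommendation):

    # redis.windows.conf
    maxmemory 100mb            # logical limit Redis tries to stay under
    maxmemory-policy allkeys-lru
    maxheap 1000mb             # physical heap reserve; headroom for fragmentation

The extra headroom between maxmemory and maxheap is what absorbs the fragmentation, so the allocator doesn't run out of usable blocks even though Redis believes it is under its memory limit.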
The effect of this: Redis will still try to keep memory usage under 100mb, but the actual physical memory in use may be larger than that, potentially up to the maxheap value. How much larger depends on the level of fragmentation. In real life it should stay at reasonable levels.
I have logged an issue with the team: https://github.com/msopentech/redis/issues/274
Edit: I've reworked this answer in light of new knowledge. See the previous versions in the edit history.