
[fetchmail] Regression: broken-up SIZE fetch FATAL for POP3 uidl keep (was: The 6.2.5 release of fetchmail is available)

2003-10-16 15:01:57
esr@thyrsus.com (Eric S. Raymond) writes:

> The 6.2.5 release of fetchmail is now available at the usual locations,
> including <URL:http://www.catb.org/~esr/fetchmail>.
>
> * Sunil Shetye's patches to break up fetching of sizes and UIDLs.

This is harmful for POP3 with keep and UIDL-based tracking:

fetchmail: POP3< 47 0c860cfd4ff14553e48ad15063ee9927
fetchmail: POP3< 48 651e6873d5f2bdcadb10e53d51f021b8
fetchmail: POP3< .
48 messages (40 seen) for XXXXXXXXXXXXX@gmx.de at pop.gmx.net (637359 octets).
skipping message XXXXXXXXXXXXX@gmx.de@pop.gmx.net:1 not flushed
skipping message XXXXXXXXXXXXX@gmx.de@pop.gmx.net:2 not flushed
...
skipping message XXXXXXXXXXXXX@gmx.de@pop.gmx.net:39 not flushed
skipping message XXXXXXXXXXXXX@gmx.de@pop.gmx.net:40 not flushed
fetchmail: cannot get a range of message sizes (41-48).
fetchmail: POP3> QUIT
fetchmail: POP3< +OK bye
fetchmail: client/server protocol error while fetching from pop.gmx.net
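
(For reference, a ranged size fetch over POP3 is just one LIST per message
number, with the server answering "+OK msgnum size" per RFC 1939. The sketch
below is only meant to illustrate that exchange; the helper names and the
line reading are my own and it is not fetchmail's pop3_getpartialsizes().)

#include <stdio.h>
#include <unistd.h>

/* Read one CRLF-terminated response line from the server socket. */
static int read_line(int sock, char *buf, size_t len)
{
    size_t i = 0;
    char c;

    while (i + 1 < len && read(sock, &c, 1) == 1) {
        if (c == '\n')
            break;
        if (c != '\r')
            buf[i++] = c;
    }
    buf[i] = '\0';
    return i > 0 ? 0 : -1;
}

/* Fetch the sizes of messages first..last (1-based) into sizes[0..last-first]
 * by issuing one "LIST n" per message.  A real client would also have to
 * handle -ERR replies and timeouts. */
int get_range_sizes(int sock, int first, int last, int *sizes)
{
    char cmd[32], buf[512];
    int num, n, echoed, size;

    for (num = first; num <= last; num++) {
        n = snprintf(cmd, sizeof cmd, "LIST %d\r\n", num);
        if (n < 0 || write(sock, cmd, (size_t)n) != (ssize_t)n)
            return -1;
        if (read_line(sock, buf, sizeof buf) < 0)
            return -1;
        /* Expected reply per RFC 1939: "+OK <msgnum> <size>" */
        if (sscanf(buf, "+OK %d %d", &echoed, &size) != 2 || echoed != num)
            return -1;
        sizes[num - first] = size;
    }
    return 0;
}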

When I backtrace this, I get:

Breakpoint 1, pop3_getpartialsizes (sock=7, first=41, last=48, sizes=0x30) at pop3.c:921
921     in pop3.c
(gdb) bt
#0  pop3_getpartialsizes (sock=7, first=41, last=48, sizes=0x30) at pop3.c:921
#1  0x08054daa in fetch_messages (mailserver_socket=7, ctl=0x8081900, count=48,
    msgsizes=0xbfffb120, maxfetch=0, fetches=0xbfffb348,
    dispatches=0xbfffb34c, deletions=0xbfffb350) at driver.c:521
#2  0x08055daa in do_session (ctl=0x8081900, proto=0x8067260, maxfetch=0)
    at driver.c:1449
#3  0x0805611b in do_protocol (ctl=0x8081900, proto=0x8067260) at driver.c:1622
#4  0x0804ed83 in doPOP3 (ctl=0xbfffb120) at pop3.c:1215
#5  0x080522d6 in query_host (ctl=0x8081900) at fetchmail.c:1373
#6  0x08050f16 in main (argc=3, argv=0xbffff534) at fetchmail.c:646
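
(Frame #1 shows fetch_messages() asking only for the 41-48 window once the 40
already-seen messages have been skipped. Purely as a sketch of the caller-side
bookkeeping such a broken-up size fetch implies -- hypothetical names, not
driver.c's actual code -- the driver has to hand the helper the slice of its
full-length sizes array that matches the requested range:)

/* Hypothetical prototype matching the shape of the call seen in frame #0. */
int get_range_sizes(int sock, int first, int last, int *sizes);

/* Illustrative only: fetch sizes just for the new messages
 * (num_seen+1..count), leaving the slots of already-seen messages alone. */
int get_new_sizes(int sock, int count, int num_seen, int *msgsizes)
{
    int first = num_seen + 1;

    if (first > count)
        return 0;               /* nothing new to size */
    /* Pass the slice starting at the first new message so that
     * message n still ends up in msgsizes[n - 1]. */
    return get_range_sizes(sock, first, count, msgsizes + (first - 1));
}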

-- 
Matthias Andree

Encrypt your mail: my GnuPG key ID is 0x052E7D95
