Discussion:
[Mldonkey-users] No buffer space available on FreeBSD
j***@hanffeld.net
2002-11-03 15:14:30 UTC
hi list

anyone else got this problem?
I'm using 2.00 on FreeBSD 4.7-STABLE.
I get a lot of these messages from mldonkey:

Exception sendto failed: No buffer space available in sendto next

They come every few minutes, and always about 20-50 at once.
On top of that I got 'no more mbufs available' in syslog very often, so I
recompiled the kernel with maxusers 256 and nmbclusters=16384.
That was said to be enough for any high-volume webserver, and the mbuf
error message in syslog has now disappeared, but mldonkey still keeps
complaining about "No buffer space available in sendto next".

netstat -m says:

132/864/65536 mbufs in use (current/peak/max):
        131 mbufs allocated to data
        1 mbufs allocated to packet headers
129/294/16384 mbuf clusters in use (current/peak/max)
804 Kbytes allocated to network (1% of mb_map in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines

I have also played around a bit with the socket settings in client ->
bandwidth, but it didn't change anything.

Still no buffer space available, and I have the strange feeling this is
not good for the overall performance of mldonkey ;)

Any advice would be very welcome.

greetings
jonas
Bernd Walter
2002-11-04 12:21:39 UTC
Post by j***@hanffeld.net
hi list
anyone else got this problem?
I'm using 2.00 on FreeBSD 4.7-STABLE.
Exception sendto failed: No buffer space available in sendto next
This is most likely a full interface queue.
Your line is attached to the same machine, so you get the errors
reported back directly.
As long as the bandwidth control doesn't work well enough, you have
to live with that.
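
To illustrate (a minimal C sketch, not mldonkey's actual code -
mldonkey is written in OCaml): when the interface output queue is
full, sendto(2) fails with errno set to ENOBUFS, and all the sender
can reasonably do is drop the datagram or retry it a bit later.

/* Sketch: treat ENOBUFS from sendto() as "interface queue full,
 * back off and retry later" rather than as a fatal error. */
#include <sys/types.h>
#include <sys/socket.h>
#include <errno.h>
#include <stdio.h>

/* Returns 0 on success, -1 if the datagram should be retried later. */
static int send_datagram(int sock, const void *buf, size_t len,
                         const struct sockaddr *dst, socklen_t dstlen)
{
    if (sendto(sock, buf, len, 0, dst, dstlen) >= 0)
        return 0;
    if (errno == ENOBUFS)
        return -1;        /* queue full: the kernel dropped the packet */
    perror("sendto");     /* anything else is a real error */
    return -1;
}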

Your netstat -m output shows that you are far from hitting the
mbuf limits.

BTW: I'm running mldonkey on FreeBSD-CURRENT on Alpha.
--
B.Walter COSMO-Project http://www.cosmo-project.de
***@cicely.de Usergroup ***@cosmo-project.de
Stephane Goulet
2002-11-04 18:01:17 UTC
Hi Jonas,

Already had problems with these myself. If I remember well, the next
step would be to tweak sysctl (man sysctl).

I put these in /etc/sysctl.conf personally; maybe someone can tell me if I
should not have put in such high values ;)

net.inet.tcp.recvspace=65535
net.inet.tcp.sendspace=65535
net.inet.ip.maxqueue=65535
net.inet.udp.recvspace=65535
net.inet.udp.sendspace=65535
kern.maxfiles=65535

(maxing out x.recvspace helps general net performance, I read that
somewhere...)
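
If you want to see what the kernel currently uses before changing
anything, sysctl(8) on the command line works, or programmatically a
small C sketch via sysctlbyname(3) - the name is taken from the list
above, and as far as I know the value is exposed as a plain int:

/* Sketch: query one of the tunables above via sysctlbyname(3). */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int main(void)
{
    int value;
    size_t len = sizeof(value);

    /* Read-only query: pass NULL/0 for the "new value" arguments. */
    if (sysctlbyname("net.inet.tcp.sendspace", &value, &len, NULL, 0) == -1) {
        perror("sysctlbyname");
        return 1;
    }
    printf("net.inet.tcp.sendspace = %d\n", value);
    return 0;
}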

Hope it helps!

-Stéphane
Bernd Walter
2002-11-04 19:12:35 UTC
Post by Stephane Goulet
Hi Jonas,
Already had problems with these myself. If I remember well, the next step
would be to tweak sysctl (man sysctl).
I put these in /etc/sysctl.conf personally; maybe someone can tell me if I
should not have put in such high values ;)
net.inet.tcp.recvspace=65535
This might create a communication problem with broken implementations
that interpret this value (0xFFFF in a 16-bit window field) as -1.
Post by Stephane Goulet
net.inet.tcp.sendspace=65535
This is already the default.
Post by Stephane Goulet
net.inet.ip.maxqueue=65535
net.inet.udp.recvspace=65535
net.inet.udp.sendspace=65535
These values are problematic: you can queue up much more data, but since
that doesn't change your line speed, you just degrade responsiveness.
It's just like the queue at a shop's cash desk - if many people are
waiting there, it takes a long time before each one gets served.
It's better to limit the queue and tell them there's no sense in
waiting - come back later if you like.
An application can't wait forever for the response to a packet; if the
overall time is too long, the packet is believed lost and is resent.

Also keep in mind that bigger buffers raise memory requirements.
Post by Stephane Goulet
kern.maxfiles=65535
It's much better to reduce the number of file handles used by mldonkey.
I'm using max_opened_connections = 300, which is absolutely enough.
Post by Stephane Goulet
(maxing out x.recvspace helps general net performance, i read that
somewhere...)
Say you have a network delay of 100 ms multiplied by a line bandwidth
of 100 kBytes/s.
That means a single connection can have up to 10 kBytes in flight on the
line, which requires a receive buffer of at least 10k.
Every byte more is just a waste of memory.
In the mldonkey case you share the bandwidth among several connections,
so the requirements for each individual socket are even lower.
The typical reason for increasing receive buffers is long-latency,
high-speed connections.
The typical reason for increasing send buffers is slow applications.
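
The same arithmetic as a throwaway C snippet (taking 1 kByte as 1024
bytes):

/* Sketch: bandwidth-delay product = the minimum useful receive
 * buffer for a single connection. */
#include <stdio.h>

int main(void)
{
    double delay_s   = 0.100;          /* network delay: 100 ms    */
    double bandwidth = 100.0 * 1024;   /* line speed: 100 kBytes/s */
    double in_flight = delay_s * bandwidth;

    /* Prints: bytes in flight: 10240 (10 kBytes) */
    printf("bytes in flight: %.0f (%.0f kBytes)\n",
           in_flight, in_flight / 1024);
    return 0;
}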

The system values for the socket buffers are only defaults, and every
socket can have its own values defined.
It's possible that mldonkey doesn't inherit the system defaults anyway.
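
For completeness, a per-socket override in C looks like this - purely
an illustration, I haven't checked whether mldonkey does anything
similar internally:

/* Sketch: give one socket its own buffer sizes, independent of the
 * net.inet.tcp.* system defaults. */
#include <sys/types.h>
#include <sys/socket.h>
#include <stdio.h>

static int set_buffers(int sock, int rcv_bytes, int snd_bytes)
{
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                   &rcv_bytes, sizeof(rcv_bytes)) == -1 ||
        setsockopt(sock, SOL_SOCKET, SO_SNDBUF,
                   &snd_bytes, sizeof(snd_bytes)) == -1) {
        perror("setsockopt");
        return -1;
    }
    return 0;
}

/* e.g. set_buffers(sock, 10 * 1024, 10 * 1024) - per the estimate
 * above, 10k each is already plenty for a single connection. */
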
--
B.Walter COSMO-Project http://www.cosmo-project.de
***@cicely.de Usergroup ***@cosmo-project.de