Kevin says:
With netsniff-ng 0.5.8-rc2+, when I run the below packet capture
session, the output seems to imply that 64K of memory is being
allocated per frame, which does not look like what I want since my
interface MTU is only 1500. This appears to be severely limiting
the number of frames I can fit into my packet capture ring.
As TPACKET_V3 is used when capturing to pcap files, frames are written
continuously into the ring blocks, so the output above gives the user a
wrong impression. Therefore, when TPACKET_V3 is in use, print this
information differently in verbose mode, since the ring works block-wise.
Reported-by: Kevin Branch <branchnetconsulting@gmail.com>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
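As an aside, a minimal sketch of how a TPACKET_V3 RX ring is requested may
make the difference clearer: memory is sized in blocks rather than in
fixed-size frame slots, so per-frame accounting in the verbose output is
misleading. The setup_rx_ring_v3() name and the block/frame numbers below
are illustrative only, not netsniff-ng's actual defaults.

/* Illustrative TPACKET_V3 ring request; assumes PACKET_VERSION has
 * already been set to TPACKET_V3 on the packet socket.  Packets are
 * packed back-to-back inside each block, so a 1500 byte MTU does not
 * pin a 64K slot per frame. */
#include <linux/if_packet.h>
#include <string.h>
#include <sys/socket.h>

static int setup_rx_ring_v3(int sock)
{
        struct tpacket_req3 req;

        memset(&req, 0, sizeof(req));
        req.tp_block_size = 1 << 22;    /* 4 MiB per block (example) */
        req.tp_block_nr   = 64;         /* 256 MiB ring in total */
        req.tp_frame_size = 2048;       /* only a sizing hint for v3 */
        req.tp_frame_nr   = (req.tp_block_size / req.tp_frame_size) *
                            req.tp_block_nr;
        req.tp_retire_blk_tov = 60;     /* block retire timeout in ms */

        return setsockopt(sock, SOL_PACKET, PACKET_RX_RING,
                          &req, sizeof(req));
}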
Prepare TPACKET_V3 so that the frame structure can be set up
transparently, such that we do not need to change much in the
netsniff-ng/trafgen code.
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
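A rough sketch of the idea, assuming a retired v3 block is handed to the
existing per-frame consumers one packet at a time via tp_next_offset; the
walk_block() helper and its callback are hypothetical, only the struct and
field names come from linux/if_packet.h.

/* Hypothetical helper: iterate the packets of one retired TPACKET_V3
 * block as if they were individual frames. */
#include <linux/if_packet.h>
#include <stdint.h>

static void walk_block(struct tpacket_block_desc *bd,
                       void (*on_frame)(struct tpacket3_hdr *))
{
        struct tpacket3_hdr *hdr;
        uint32_t i, num = bd->hdr.bh1.num_pkts;

        hdr = (struct tpacket3_hdr *)((uint8_t *)bd +
                                      bd->hdr.bh1.offset_to_first_pkt);

        for (i = 0; i < num; ++i) {
                on_frame(hdr);  /* consumer still sees one frame at a time */
                hdr = (struct tpacket3_hdr *)((uint8_t *)hdr +
                                              hdr->tp_next_offset);
        }
}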
We do not want to maintain duplicate code, so move this into a separate
file and name those helpers *_generic().
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
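As an illustration of the naming, such a shared helper could look roughly
like the sketch below; only the *_generic() convention comes from the
commit, the exact signature is a guess.

/* Both RX and TX rings are mapped the same way, so the mmap() call can
 * live in one shared helper instead of being duplicated per ring type. */
#include <stddef.h>
#include <sys/mman.h>

static void *mmap_ring_generic(int sock, size_t len)
{
        void *mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_SHARED, sock, 0);

        return mem == MAP_FAILED ? NULL : mem;
}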
There's no good reason to waste an 'int' for jumbo_support when it is
better expressed as a 'bool'.
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Prepare setup_rx_ring_layout for both v2 and v3. Also add compile-time
checks that the offsets stay the same, since we operate on different
union mappings.
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
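For example, with the layout kept in a union of the v2 and v3 request
structs, such checks could look like the sketch below; the union and the
BUILD_BUG_ON macro here are illustrative, not necessarily netsniff-ng's
own definitions.

/* Fields shared by both request structs must sit at the same offsets,
 * so generic code can touch them regardless of the ring version. */
#include <linux/if_packet.h>
#include <stddef.h>

union ring_layout {
        struct tpacket_req  req2;       /* TPACKET_V2 */
        struct tpacket_req3 req3;       /* TPACKET_V3 */
};

#define BUILD_BUG_ON(cond) ((void) sizeof(char[1 - 2 * !!(cond)]))

static inline void check_ring_layout(void)
{
        BUILD_BUG_ON(offsetof(struct tpacket_req,  tp_block_size) !=
                     offsetof(struct tpacket_req3, tp_block_size));
        BUILD_BUG_ON(offsetof(struct tpacket_req,  tp_frame_nr) !=
                     offsetof(struct tpacket_req3, tp_frame_nr));
}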
Rename it to set_sockopt_tpacket_v2 so that we can later add other
versions as well and clearly state which one we use.
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
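For reference, the version-specific part is essentially a single
setsockopt() call, roughly as sketched below (error handling and a v3
sibling omitted):

/* Select the ring protocol version on a packet socket; a v3 variant
 * would differ only in the constant it passes. */
#include <linux/if_packet.h>
#include <sys/socket.h>

static int set_sockopt_tpacket_v2(int sock)
{
        int val = TPACKET_V2;

        return setsockopt(sock, SOL_PACKET, PACKET_VERSION,
                          &val, sizeof(val));
}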
If we unmap the TX ring buffers while timer shots that trigger the
kernel to traverse the TX_RING are still pending, it can send out random
crap in some situations. Prevent this by first destroying the timer and
then flushing the TX_RING in wait mode.
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
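Sketched with an interval timer and a simple status poll, the teardown
order reads roughly as follows; struct ring and flush_and_unmap_tx_ring()
are illustrative stand-ins for netsniff-ng's own types and helpers.

/* Safe TX teardown: stop the timer that kicks the kernel, wait until
 * every frame has been sent (status back to TP_STATUS_AVAILABLE), and
 * only then unmap the ring memory. */
#include <linux/if_packet.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <sys/uio.h>
#include <unistd.h>

struct ring {
        void *mm_space;         /* mmap()ed ring memory */
        size_t mm_len;
        struct iovec *frames;   /* iov_base points at each tpacket2_hdr */
        size_t frame_nr;
};

static void flush_and_unmap_tx_ring(struct ring *r)
{
        struct itimerval disarm = { 0 };
        size_t i;

        /* 1) no more timer shots that make the kernel walk the TX_RING */
        setitimer(ITIMER_REAL, &disarm, NULL);

        /* 2) wait until pending frames have actually left the ring */
        for (i = 0; i < r->frame_nr; ++i) {
                volatile struct tpacket2_hdr *hdr = r->frames[i].iov_base;

                while (hdr->tp_status != TP_STATUS_AVAILABLE)
                        usleep(1000);
        }

        /* 3) only now is it safe to give the ring memory back */
        munmap(r->mm_space, r->mm_len);
}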
For both the RX_RING and the TX_RING, we need to unmap first and then
destroy the buffer; otherwise, we get a 'Device or resource busy' error.
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
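A small sketch of that ordering, assuming the ring is finally torn down by
handing the kernel an all-zero request; destroy_ring_generic() is an
illustrative name.

/* Unmap first, then zero out the ring request; the reverse order fails
 * with EBUSY ("Device or resource busy") while pages are still mapped. */
#include <linux/if_packet.h>
#include <stddef.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/socket.h>

static void destroy_ring_generic(int sock, int type,   /* PACKET_{RX,TX}_RING */
                                 void *mm_space, size_t mm_len)
{
        struct tpacket_req req;

        munmap(mm_space, mm_len);       /* 1) release the mapping */

        memset(&req, 0, sizeof(req));   /* 2) then drop the ring */
        setsockopt(sock, SOL_PACKET, type, &req, sizeof(req));
}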
If something screws up, which is rather unlikely, let the user know.
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
We decided to get rid of the old Git history and start a new one for
several reasons:
*) Allow / enforce only high-quality commits (which was not the case
for many commits in the old history), with a policy closer to the
one of the Linux kernel. By high-quality commits we mean changes
that are logically split into commits, and commit messages that are
signed off and have a proper subject and message body.
We do not allow automatic Github merges anymore, since they are
total bullshit. However, we will either cherry-pick your patches
or pull them manually.
*) The old archive was about 27 MB in size for no particularly good
reason. This mainly derived from the bad decision to also store some
PDF files in the repository. From this moment onwards, no binary
objects are allowed to be stored in this repository anymore.
The old archive is not wiped away from the Internet. You will still
be able to find it, e.g. on git.cryptoism.org etc.
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>