| Commit message |
Cygwin's shell (which is what's used in MSYS2 and git-bash, the latter
of which appears to be installed by Cirrus CI) screws up massively when
constructing the UN*X path from the Windows PATH if any elements in the
Windows PATH have blanks - such as anything under C:\Program Files.
This causes the UN*X path to contain mangled versions of the git-bash
executable directories, so that, for example,
C:\Program Files\Git\usr\bin
is turned into
/usr/bin
rather than into
/c/Program Files/Git/usr/bin
(which needs the blanks to be escaped *if you type it on the command
line*, but is *perfectly OK with the spaces in it* as part of an
environment variable!).
As a result, none of the git-bash commands are found.
This... is not good.
See if the Windows path is imported in the global or personal shell
profiles.
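For illustration only (not part of the commit), this is what a correct
conversion looks like when done with cygpath in an MSYS2/Git-bash shell;
the PATH value in the second command is a made-up example:

    # A single element: the embedded space must survive the conversion.
    $ cygpath -u 'C:\Program Files\Git\usr\bin'
    /c/Program Files/Git/usr/bin

    # A whole semicolon-separated Windows PATH; -p treats the argument as
    # a path list and produces a colon-separated UN*X path.
    $ cygpath -u -p 'C:\Windows\system32;C:\Program Files\Git\usr\bin'
    /c/Windows/system32:/c/Program Files/Git/usr/bin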
|
Just dump the full environment.
|
Show the values of the Windows and UN*X paths, so we can see what's in
them and figure out what we need to add to translate the Windows path to
a UN*X path.
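A rough sketch (not the script's actual code) of one way to print both
forms from a Git-bash shell:

    # The UN*X-style path as the shell sees it.
    echo "UN*X PATH: $PATH"
    # Ask cmd.exe for the Windows-style path; %PATH% is expanded by cmd,
    # not by the shell, and the doubled slash keeps MSYS path conversion
    # from rewriting the /c switch.
    cmd //c echo "%PATH%"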
|
BSD sed:
sed: 1: "s/^.* Microsoft ($) C/C ...": bad flag in substitute command: 'm'
GNU sed:
sed: -e expression #1, char 73: unknown option to `s'
Address a couple other typos while at it.
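Both messages are what the two seds print when an s command contains a
stray, unescaped delimiter. A sketch of the usual fix, with a made-up
pattern and sample input standing in for the script's real ones: switch
to a delimiter that can't appear in the text.

    # Broken: the '/' inside "C/C++" ends the s command early, so sed
    # parses what follows as (invalid) flags:
    #     sed 's/^.* C/C++ Compiler Version //'
    # Fixed: use a delimiter that appears in neither the pattern nor the
    # replacement:
    echo 'blah Microsoft C/C++ Compiler Version 19.29 for x64' |
        sed 's;^.* C/C++ Compiler Version ;;'
    # prints: 19.29 for x64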
|
Both are in POSIX, but the binaries provided with Git for Windows in
Cirrus CI include expr but not bc.
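A minimal sketch of the kind of rewrite this implies; the variable names
are hypothetical, not taken from the build scripts:

    cores=2                          # hypothetical value
    # With bc (not among the Git-for-Windows binaries on Cirrus CI):
    #   jobs=$(echo "$cores + 1" | bc)
    # With expr, which those binaries do provide and which is also in POSIX:
    jobs=$(expr "$cores" + 1)
    echo "$jobs"                     # prints 3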
|
Move env: before install_script:, as it should affect both.
Add MSYSTEM: MINGW64, so that we do a MinGW64 build rather than an MSYS
build, as we're building native Windows software.
Do some directory searches to see what binaries come with Git and where
uname came from (not from MSYS2, as we weren't installing it when we
first tested it).
Try using a Windows path to build_matrix.sh.
|
I have no idea what the hell comes in the container by default, but sh
appears to be from Git. We need something closer to a UN*X environment
than that if we're going to use the build scripts on Windows.
|
Dear Cirrus: I wouldn't be pounding on your buildbots if you had
bothered to document MSYS somewhere on cirrus-ci.com or cirrus-ci.org,
but if I look for msys or msys2 on either of those sites, I get the
smirking ice-fishing daemon from Google.
|
There only appears to be one drive, but dir C:\ doesn't show C:\Users,
which is the directory in which the job was run.
|
No C:\tools, let's walk down from the top.
|
Something I found in a Chromium .cirrus.yml file suggests that pacman
might be in C:\tools\msys64\usr\bin\pacman; first, check whether there
is a C:\tools and, if so, what's in there.
|
cmd.exe doesn't find it; it's probably an MSYS2 command and requires
that it be run from the MSYS2 shell.
|
It doesn't appear to be installed by default.
|
Add MSYS_NT/Visual Studio support to build_common.sh, and turn the
Windows build on.
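A hedged sketch of what such support might look like in a shell build
script; this is not the actual build_common.sh change, and the variable
names are illustrative:

    # Hypothetical: key off uname output such as "MSYS_NT-10.0-17763" to
    # select a Visual Studio-based configuration on Windows.
    case "$(uname -s)" in
    MSYS_NT*|MINGW*)
        IS_WINDOWS=yes
        CC=${CC:-cl}     # assume the Visual Studio compiler driver
        ;;
    *)
        IS_WINDOWS=no
        ;;
    esac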
|
The UN*Xy stuff is claimed by uname to be "MSYS_NT-10.0-17763", i.e.
it's MSYS of some sort.
|
Poking the beast to see what changes will be needed to build scripts.
|
It's a good start; now we have to make the build scripts work.
|
Use "curl -O" to put the download in a file in the current directory
with the same name as the last component of the URL.
Use "curl -o" to put the download in a file in the current directory
with a specified name.
Quote the URL for the AirPcap SDK, just in case it's being treated
weirdly by a Windows argument-list parser.
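For reference, a small sketch of the two download styles; the URLs and
file names are placeholders, not the ones in the script:

    # -O: save as the last component of the URL (here, wpdpack.zip).
    curl -O "https://example.com/files/wpdpack.zip"
    # -o: save under an explicitly chosen local name.
    curl -o airpcap-sdk.zip "https://example.com/files/airpcap-devpack.zip"
    # Quoting the URL keeps the shell (or a Windows argument-list parser)
    # from mangling characters such as spaces or '&'.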
|
It looks as if we have a Bourne shell here....
Also, squelch progress reports in Curl, to avoid noise.
|
See if I can install the WinPcap developer's pack the same way we do on
AppVeyor.
See if we have something resembling a Bourne shell available. (If so,
we may be able to let the build scripts we use on UN*Xes do the work on
Windows as well.)
|
Downloading Curl produced a *lot* of noise.
|
The OpenSSL library on the current AppVeyor Visual Studio 2019 images
has a weird opensslv.h that claims it's 1.0.2, even though it's 3.0.
This causes... problems.
For now, we disable the remote capture build there.
Back out the debugging stuff and the attempt to fix it in sslutils.c -
the weird opensslv.h causes that not to work.
|
[skip ci]
|
Where *did* that file come from?
[skip ci]
|
Do a grep^Wfind for OPENSSL_VERSION in all .h files in the directory
containing opensslv.h, to see what sets it, and what they set it to.
Again, AppVeyor (and anything that doesn't recognize "skip ci" at
all)-only.
[skip ci]
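A rough sketch of such a search, assuming (hypothetically) that the
directory containing opensslv.h is in $ssl_include:

    # Print every line mentioning OPENSSL_VERSION in the headers, with
    # the file name, to see what defines it and what value it gets.
    find "$ssl_include" -name '*.h' -exec grep -H OPENSSL_VERSION {} \;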
|
Putting it on the header suppresses AppVeyor builds; putting it in the
body might still not do so.
Back out a no-longer-needed debugging message, so we have something to
commit.
[skip ci]
|
Remove all the pre-VS 2019 AppVeyor builds, as they don't have the
problematic OpenSSL.
In the build script, look for any opensslv.h* file under the top-level
include directory of the problematic SSL; that's the file that sets the
OPENSSL_VERSION variables.
Add [skip ci] in the hopes of provoking only AppVeyor (and OpenCSW?) to
build this commit.
|
This purports to be 3.0, and doesn't offer some old routines, but is
claiming to be 1.0.2!
|
The Shiny New OpenSSL 3.0.8 on the AppVeyor images with Visual Studio
2019 and later is missing some routines that have, apparently, been
deprecated since 1.1.0. If we have OpenSSL 1.1.0 or later, use the
replacements.
|
AppVeyor Updated Something, and VS 2019 is getting some weird linking
error with the new version of OpenSSL installed on that image.
|
That's where the binding is done, so that's where we need to get a zone
ID and interface name within that zone from a device name.
This is based on pull request #1202, but reports errors correctly, and
works even when built on older versions of Solaris 11 without zone
support in BIOCSETLIF.
Fix/update some comments while we're at it.
|
"Invalid argument" doesn't indicate that the problem is that the zone in
question doesn't exist and "File name too long" doesn't indicate that
the problem is that the zone in question doesn't exist because it can't
possibly exist because zone names that long are not allowed.
Report both as "no such device on which to capture" rather than just as
a generic error.
|
Move the stuff to figure out for what architectures to build before we
do anything with rpcapd.
Have that stuff decide both what architectures to build libraries for
and what architectures to build executables for. Call the executables
list OSX_EXECUTABLE_ARCHITECTURES rather than OSX_PROGRAM_ARCHITECTURES,
to match the terminology used in the names of the "add a target"
commands.
Remove the not-updated-for-ARM stuff in rpcapd's CMakeLists.txt, and
just use the architecture list we picked in the main CMakeLists.txt.
Remove some debugging messages.
|
If libpcap isn't going to support remote capture, there's nothing for
which to use TLS.
This means we don't do the "make sure we have a universal build of
libssl" check when figuring out whether to do a universal build of
libpcap.
|
The SUSv4 description of make at
https://pubs.opengroup.org/onlinepubs/9699919799/utilities/make.html
says, in "Makefile syntax", that "Blank lines, empty lines, and lines
with <number-sign> ( '#' ) as the first character on the line are also
known as comment lines."
In at least some versions of make, lines in the set of target rules whose
first character is a tab and whose *second* character is a # are treated
as commands, not comments. Given that shells normally treat # as the
beginning of a comment, those end up being "commands" that do nothing
and return an exit status of 0, so they happen to work, but they
1) may cause a shell process to be created to run them;
2) cause extra output from make if it's printing comments.
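A tiny illustration, not taken from the real Makefiles: in the first
rule the comment line begins with a tab, so some makes pass it to the
shell as a do-nothing command; in the second it begins with '#' in the
first column and is always treated as a comment:

foo.o: foo.c
	# tab-indented: handed to the shell as a (no-op) command by some makes
	$(CC) -c foo.c

bar.o: bar.c
# column-one '#': a comment line, never executed
	$(CC) -c bar.c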
|
* amount
* anymore
* authentication
* availability
* bracket
* captured
* casted
* communications
* compliant
* configurable
* cumulate
* deinitialize
* descriptors
* didn't
* disassembler
* disassociate
* distributions
* divvy
* doing
* entries
* everything
* explicitly
* explosion
* expression
* extracting
* failed
* family
* find
* github
* global
* implementations
* incorrectly
* intel
* interlocked
* justifying
* know
* launched
* libraries
* malloced
* mask
* maximum
* network
* nonexistent
* number
* occurred
* optimizer
* overflow
* overwrite lower
* packet
* packetfilter
* packets
* parse hosts
* payload
* phase
* programmers
* promiscuous
* protocol
* receiving
* redefinition
* sampling
* savefile
* schwartz
* should
* snapshot
* something
* specifies
* straightforward
* stream
* subdir
* support
* surrogate
* suse
* system is
* test with
* than
* those
* unmaintained
* valid
* way
* western
* wireshark
Signed-off-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
|
If we can't allocate a DLT_ list, fail.
|
Some code was already doing that (for example, pcap-bpf.c, if fetching
the DLT list with an ioctl) and, if you can't allocate a DLT_ list,
which is usually pretty small, you may have other memory allocation
problems later, so letting the program open the interface anyway (and
not get a correct list of all the link-layer types it supports) may not
be worth it.
|
If you've gotten to that point in the code, p->dlt_list is non-null, so
there's no need to check for it being null. (If it was null when
dag_get_datalink() was entered, it would try to allocate it and, if that
failed, would return an error.)
|