Discussion:
Remote logging with auditd
Wouter van Verre
2014-11-01 21:49:24 UTC
Hi all,

I am trying to set up logging using the audit framework, but I have some questions about how the system works and how the components fit together.

My use case is as follows:
* I would like to have one or more servers on my network capturing data, including TTY sessions.
* I would then like to have these servers (the 'client servers') submit the data to another server on the network (the 'central server').
* This central server would then write the incoming data to disk, and do some processing on the data as well.

My current idea on how to implement this is to:
* Run auditd + audisp + audisp-remote on every client server.
* Use pam_tty_audit.so on every client server for the TTY logging.
* Run auditd on the central server to receive the data and write it to disk.
* Either implement my processing tool such that it can be used instead of the dispatcher, or implement it as a plugin for audisp?

I'd love some feedback on whether this setup makes sense. In particular, is receiving the data with auditd on the central server the best way to go? And which option is recommended for implementing the processing tool? I would think a custom plugin for audisp would be best. If so, is there any documentation on implementing an audisp plugin that I could read?

I have already experimented with this setup a bit, and have come to the conclusion that I am not sure how things work...
I have implemented a single client running auditd + audisp + audisp-remote with logging of TTY sessions (using pam_tty_audit.so), and a central server running auditd (with auditd configured to listen on port 60).
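For concreteness, the wiring described above usually comes down to two config files. A sketch, assuming the stock file locations of the 2014-era audit packages and a made-up server name:

```
# Client: /etc/audisp/audisp-remote.conf
remote_server = central.example.com   # hypothetical; the central server
port = 60

# Client: the au-remote plugin must also be enabled, i.e.
# /etc/audisp/plugins.d/au-remote.conf needs "active = yes"

# Central server: /etc/audit/auditd.conf
tcp_listen_port = 60
```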

This seems to work to an extent:
* On the client server all the data is logged to /var/log/audit/audit.log and I can see it there.
* On the client server I can run "aureport --tty" and I will see the TTY session data represented more easily.
* When I am on the central server I can run "aureport --tty" and see the TTY session data for sessions on the client server.
My conclusion based on this is that the central server must be receiving and storing data properly?

* However, when I look at /var/log/audit/audit.log on the central server I can only see audit data for that server. So my question is, where does the audit data from the client server get stored?
* When I connect a very simple program to the auditd daemon (instead of the default dispatcher) it doesn't seem to receive any input, even though "aureport --tty" shows that the daemon has been receiving data in the meantime...

Any help or pointers would be highly appreciated :)


Many thanks in advance,

Wouter
Steve Grubb
2014-11-02 18:12:53 UTC
On Saturday, November 01, 2014 10:49:24 PM Wouter van Verre wrote:
> Hi all,
>
> I am trying to set up logging using the audit framework, but I have some
> questions about how the system works and how the components fit together.

This presentation is a pretty good overview, see slide 5:
http://people.redhat.com/sgrubb/audit/audit_ids_2011.pdf


> My use case is as follows:
> * I would like to have one or more servers on my network capturing data,
> including TTY sessions.
> * I would then like to have these servers (the 'client servers') submit the
> data to another server on the network (the 'central server').
> * This central server would then write the incoming data to disk, and do
> some processing on the data as well.
>
> My current idea on how to implement this is to:
> * Run auditd + audisp + audisp-remote on every client server.
> * Use pam_tty_audit.so on every client server for the TTY logging.
> * Run auditd on the central server to receive the data and write it to disk.
> * Either implement my processing tool such that it can be used instead of
> the dispatcher, or implement it as a plugin for audisp?

Sure, if necessary in realtime. That same presentation referenced above also
gives an introduction to the auparse library.


> I'd love some feedback on whether this set up makes sense. In particular on
> whether receiving the data with auditd on the central server is the best
> way to go? And on which option is recommended for implementing the
> processing tool? I would think that a custom plugin for audisp would be
> best? If so, is there any documentation on how to go about implementing a
> plugin for audisp that I could read?
>
> I have already experimented with this set up a bit, and have come to the
> conclusion that I am not sure how things work... I have implemented a
> single client running auditd + audisp + audisp-remote with logging of TTY
> session (using pam_tty_audit.so), and a central server running auditd (with
> auditd configured to listen to port 60).
>
> This seems to work to an extent:
> * On the client server all the data is logged to /var/log/audit/audit.log
> and I can see it there.
> * On the client server I can run "aureport --tty" and I will see the TTY
> session data represented more easily.
> * When I am on the central server I can run "aureport --tty" and see the TTY
> session data for session on the client server. My conclusion based on this
> is that the central server must be receiving and storing data properly?

Yes, that sounds right. I'd also mention that if you are doing central
logging, you need to tell audispd or auditd that you want the node name
prepended to the event so that at the aggregating server you can tell the
difference.


> * However, when I look at /var/log/audit/audit.log on the central server I
> can only see audit data for that server.

My first guess is that you don't have the client adding node information. That
makes it a lot clearer. You should be able to search using --node to locate the
records from the client.


> So my question is, where does the audit data from the client server get
> stored?

In the aggregating server's directory.

> * When I connect a very simple program to the auditd daemon (instead of the
> default dispatcher) it doesn't seem to receive any input at the moment, even
> though "aureport --tty" is showing that the daemon has been receiving data
> in the mean time...

The preferred way of adding analytical applications is to make them an audispd
plugin. You could make one a dispatcher if you want, but the interface is a bit
different. The audit tarball should have an example program of both kinds.

-Steve
Wouter van Verre
2014-11-02 21:16:31 UTC
Hi Steve,

Many thanks for your response.
I will be reading the presentation and the examples in the tarball and go from there for implementing my processing plugin.

Regarding the logging to disk on the central server:
I have node names set up for both servers now and am now getting the following behaviour:
On the client server I can see the events being prefixed with node=Elephant in the log on that server.
On the central server I can see that local events are being prefixed with node=Mongoose.
However, events that were sent to the central server by the client server show up in the central server's log with
node=localhost.localdomain. So it seems that the node information gets lost between the client and central server?

Would you have any idea why the node information is lost?


Many thanks,

Wouter

LC Bruzenak
2014-11-02 21:25:50 UTC
On 11/02/2014 03:16 PM, Wouter van Verre wrote:
> Hi Steve,
>
> Many thanks for your response.
> I will be reading the presentation and the examples in the tarball and
> go from there for implementing my processing plugin.
>
> Regarding the logging to disk on the central server:
> I have node names set up for both servers now and am now getting the
> following behaviour:
> On the client server I can see the events being prefixed with
> node=Elephant in the log on that server.
> On the central server I can see that local events are being
> prefixed with node=Mongoose.
> However, events that were sent to the central server by the client
> server show up in the central server's log with
> node=localhost.localdomain. So it seems that the node information
> gets lost between the client and central server?
>
> Would you have any idea why the node information is lost?
>
>
> Many thanks,
>
> Wouter

Check /etc/audisp/audispd.conf on your client.
Look at the line with "name_format=" and it probably says "hostname"
(case insensitive).
Test this by running the "hostname" command on your client.
See the audispd.conf man page for more info.
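In other words (a sketch; see audispd.conf(5) for the exact option names), the client can either rely on a properly configured hostname or pin the label explicitly:

```
# /etc/audisp/audispd.conf on the client
# name_format = hostname uses whatever the system hostname resolves to,
# which on a box without a configured hostname can come out as
# localhost.localdomain. Pinning an explicit name avoids that:
name_format = user
name = Elephant
```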

LCB

--
LC (Lenny) Bruzenak
***@magitekltd.com
Wouter van Verre
2014-11-02 22:09:11 UTC
That fixed that issue.
Many thanks!

I'm going to have a look at implementing the plugin tomorrow.

Cheers!

Wouter van Verre
2014-11-13 22:23:59 UTC
Hi Steve (and others),

Many thanks for the presentation, it has been very helpful. I have started to work on a simple plugin in Python, but I got a bit stuck again.
At the moment it just logs all data on STDIN to a file in /tmp.

Right now the system is set up such that on the client server the audit data gets sent to my central server using audisp + audisp-remote.
The central server receives the data using auditd, logs it to /var/log/audit/audit.log and sends it to audisp, which in turn sends it to my plugin.
I can see audit events from both my client and central server being written to /var/log/audit/audit.log, as expected. The two are easily distinguished by the node names.

However, in my plugin I only seem to receive data from the central (i.e. local) server... I draw this conclusion both because I see only one node name, and because I generate TTY events on the client server only (they show up in /var/log/audit/audit.log as expected), yet they do not show in the output from my plugin.
Is this the expected behaviour? Are plugins only supposed to receive the locally generated audit events?
If it is, is there a way to forward the remotely generated data to a plugin on the central server?
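Incidentally, telling the two apart programmatically only needs a small parse of the node= prefix. A sketch (assuming records carry the prefix, as they do once name_format is set):

```python
def node_of(record):
    # Records written by an aggregating auditd start with "node=<name> "
    # when the sender prepends node information; otherwise return None.
    if record.startswith("node="):
        return record.split(None, 1)[0][len("node="):]
    return None
```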

Any help would be much appreciated.


Many thanks,

Wouter

Steve Grubb
2014-11-14 02:44:53 UTC
On Thursday, November 13, 2014 11:23:59 PM Wouter van Verre wrote:
> However, in my plugin I only seem to receive data from the central (i.e.
> local) server...

The feed to audispd, right now, is taken before remote events are received. Meaning
that audispd only sees local events and never aggregated events... as things are
now.

> I draw this conclusion both because I see only one node name, and also
> because I generate TTY events on the client server only (and they show in
> /var/log/audit/audit.log as expected), and these do not show in the output
> from my plugin. Is this the expected behaviour?

Today, yes.

> Are plugins only supposed to receive the locally generated audit events? If
> it is, is there a way to forward the remotely generated data to a plugin on
> the central server?

Yes, and it would take some changes to the listening code to insert the events
at the right point in the event loop.

-Steve
David Flatley
2014-11-14 15:16:12 UTC
While checking audit logs for failed logins, it was noticed that the
AUID was one name and there was a UID of the user that failed the login. The
only thing we can figure is that the AUID user rebooted the system
by logging in as himself and then using sudo to reboot the system prior to
the failures. Are we correct in this assumption?


David Flatley
"To err is human. To really screw up requires the root password." -UNKNOWN
Steve Grubb
2014-11-14 15:26:26 UTC
On Friday, November 14, 2014 10:16:12 AM David Flatley wrote:
> While checking audit logs for failed logins, it was noticed that the
> AUID was one name and there was a UID of the user that failed the login. The
> only thing we can figure is that the AUID user rebooted the system
> by logging in as himself and then using sudo to reboot the system prior to
> the failures. Are we correct in this assumption?

Maybe. If the auid was someone with admin powers, they might have restarted a
daemon, which would insert their auid into the daemon and then cause other
users' logins to be recorded wrongly. But generally, when auid != uid, they have
used sudo or su.
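A quick way to flag such records is to compare the two fields. A sketch in Python (the sample records in the test are simplified, not verbatim audit lines):

```python
def audit_fields(record):
    # Naive key=value split; fine for uid/auid, though real records can
    # contain quoted values with embedded spaces.
    out = {}
    for part in record.split():
        if "=" in part:
            key, value = part.split("=", 1)
            out[key] = value
    return out

def used_su_or_sudo(record):
    # True when the login uid (auid) differs from the current uid,
    # which generally means su or sudo was involved.
    f = audit_fields(record)
    return "auid" in f and "uid" in f and f["auid"] != f["uid"]
```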

-Steve
Wouter van Verre
2014-11-18 12:21:23 UTC
Hi Steve,

Many thanks for your response. I made an attempt to modify the code in order to make it aggregate events.
I am not quite happy with the way the changes ended up looking, nor with how the resulting log file looked.
I do plan to have another go at this in the future, but for now I'm going to move on with a different setup,
where the plugin will run locally and I will send the parsed data to a remote machine for storage.

I have some questions for that as well, but I will post those in a new thread.

Cheers,

Wouter
