Wednesday, November 14, 2018

Zabbix 4.0.1 on CentOS 7 with SELinux on

I have been playing with Zabbix 4.0.1 on CentOS 7 and I noticed that with SELinux turned on, the server won't start. I searched the audit.log for denials with ausearch, and this is what I got on my system:
# ausearch -m avc -c zabbix
time->Thu Nov  8 16:06:07 2018
type=PROCTITLE msg=audit(1541685967.785:315): proctitle=2F7573722F7362696E2F7A61626269785F7365727665723A20616C657274206D616E61676572202331207374617274696E67
type=SYSCALL msg=audit(1541685967.785:315): arch=c000003e syscall=49 success=no exit=-13 a0=7 a1=7ffc2589bd50 a2=6e a3=8 items=0 ppid=12052 pid=12085 auid=4294967295 uid=996 gid=992 euid=996 suid=996 fsuid=996 egid=992 sgid=992 fsgid=992 tty=(none) ses=4294967295 comm="zabbix_server" exe="/usr/sbin/zabbix_server_mysql" subj=system_u:system_r:zabbix_t:s0 key=(null)
type=AVC msg=audit(1541685967.785:315): avc:  denied  { create } for  pid=12085 comm="zabbix_server" name="zabbix_server_alerter.sock" scontext=system_u:system_r:zabbix_t:s0 tcontext=system_u:object_r:zabbix_var_run_t:s0 tclass=sock_file
----
time->Thu Nov  8 16:06:17 2018
type=PROCTITLE msg=audit(1541685977.984:320): proctitle=2F7573722F7362696E2F7A61626269785F7365727665723A2070726570726F63657373696E67206D616E61676572202331207374617274696E67
type=SYSCALL msg=audit(1541685977.984:320): arch=c000003e syscall=49 success=no exit=-13 a0=7 a1=7ffc8e09ec20 a2=6e a3=8 items=0 ppid=12108 pid=12142 auid=4294967295 uid=996 gid=992 euid=996 suid=996 fsuid=996 egid=992 sgid=992 fsgid=992 tty=(none) ses=4294967295 comm="zabbix_server" exe="/usr/sbin/zabbix_server_mysql" subj=system_u:system_r:zabbix_t:s0 key=(null)
type=AVC msg=audit(1541685977.984:320): avc:  denied  { create } for  pid=12142 comm="zabbix_server" name="zabbix_server_preprocessing.sock" scontext=system_u:system_r:zabbix_t:s0 tcontext=system_u:object_r:zabbix_var_run_t:s0 tclass=sock_file
----
The Zabbix server process took ages to shut down, and even restarting the whole VM took a very long time. I turned off SELinux and the problem disappeared. However, I like SELinux and I want to keep it on, so I decided to dig a little and find out why the zabbix_t context can't write to the zabbix_var_run_t context. So I executed:
# sepolicy transition -t zabbix_t
And I got the following error:
ValueError: zabbix_var_run_t must be an SELinux process domain:
Valid domains: abrt_t, abrt_dump_oops_t, abrt_handle_event_t, abrt_helper_t.........
The output is pretty big, but I truncated it to the most interesting part. It appears that the zabbix_var_run_t context is not defined as a process domain at all. So I decided to set only the zabbix_t context to permissive and leave SELinux in enforcing mode.
# semanage permissive -a zabbix_t
You can verify this with:
# semodule -l | grep permissive
And now restarting the Zabbix server is no longer a problem, and we still have SELinux running.
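An alternative to making the whole domain permissive, which I have not tried here, is to turn the logged denials into a small local policy module with audit2allow (from the policycoreutils-python package); the module name zabbix_local is just an example:

```shell
# ausearch -m avc -c zabbix | audit2allow -M zabbix_local
# semodule -i zabbix_local.pp
```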
Info about SELinux troubleshooting was gathered from the Red Hat documentation.

Friday, December 12, 2014

Dell Deployment Toolkit (DTK) and Cobbler


Hello everybody, it has been a long time since my last post, but now I'm back :) and today I'm going to describe my latest assignment for a provisioning system.
So let's start straight with the problem. I got a few racks filled with Dell servers, and they had to be configured and provisioned with Linux as fast as I could do it. After wondering which provisioning solution to choose, I picked Cobbler.

Anyone who has configured Dell servers knows that the basic configuration of BIOS, iDRAC and RAID can cost you almost half of the time of installing the OS. With so much hardware I would need 2-3 days doing the same task like a monkey, only to prepare the servers for OS deployment. So I decided to go with some sort of auto-configuration solution that would help me deliver them much faster. As I mentioned, I had already decided to use provisioning services for OS deployment, so I started digging in this direction. After some googling I found that Dell provides a deployment toolkit (DTK) for configuring the system (BIOS, iDRAC, RAID) and even installing an OS. Unfortunately, OS installation is limited to RHEL, SUSE Enterprise and Windows. But it is superb for configuring a bare-metal server, and you get a working iDRAC in no time.
You can get the Dell Deployment Toolkit from here. Unfortunately, after a while the link for downloading OpenManage Deployment Toolkit for Linux x64 broke, but after some googling I found the most recent (as of this writing) DTK version from 11 November 2014; you can get it from here. Mount it and see what we have in the ISO file:

# mount -t iso9660 -o loop dtk_4.4_1294_Linux64_A01.iso /mnt

Two of the directories are interesting to us: RPMs/ and isolinux/. Before we continue, let's clarify what our options are and how we can achieve our goal with Dell DTK. First we need some tools to export the existing configuration from an already installed and configured server. Then there should be some environment into which we can upload the exported files and import them on a new system. We need some sort of script that runs in a minimal OS environment and changes the BIOS configuration (I'll write BIOS for short, but I mean all three: BIOS, iDRAC and RAID), and after one or two restarts we have a working iDRAC and, fortunately, a RAID configuration. This is our goal, and Dell DTK provides exactly this functionality. The three tools that will do the job are SYSCFG, RAIDCFG and RACADM. These tools, along with others, can be found in the RPMs/ directory. There is a very nice video tutorial on how to install them on RHEL 6/CentOS 6 in the Dell DTK Wiki. We can save the current configuration to a file, edit it and apply it directly to a new server with the same tools.
The first one I'm going to use is SYSCFG. It is used primarily for configuring the BIOS. You can dump your whole current configuration with the command:


# syscfg -o bios-settings.ini

The output file is in INI format and it is quite long. I'm not going to discuss the different parts of the file; you can see what every variable sets in the Dell OpenManage Deployment Toolkit Command Line Interface Reference Guide (I got it from here). If you are considering using the DTK seriously, have a look at its user's guide.

Once you have the BIOS configuration, it is good to get the RAID one as well with:

# raidcfg -o=raidcfg.ini


The different configuration parts can be found in the Reference Guide as well. The last thing we need is the iDRAC configuration, and we can get it with the most powerful tool, RACADM, which is capable of configuring some of the BIOS settings as well. RACADM is actually a tool for a CLI connection to the iDRAC module, mostly over SSH. You can read and write different configuration variables, export and import a configuration, and create users. It is quite flexible, and most of the information can be found here. Actually, there is a more recent version of iDRAC – v8 – but my environment consists of version 7 modules. So to export your current iDRAC configuration, type:


# racadm getconfig -f idrac-config.ini

Like the other files, this one has INI syntax, but it is quite large. Almost all of the iDRAC configuration options can be found there; however, there are a few that are not present. The complete list of options can be found in the Reference Guide.
Finally, we have a configured server that will be used as a template, and we have exported all of its configuration to files we will use in future deployments.
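To close the loop, the exported files can later be replayed on a new machine. The sketch below is an assumption, not tested against real hardware: the exact import flags should be checked in the Reference Guide for your DTK version, so by default it only prints what it would run.

```shell
#!/bin/sh
# Sketch: replay the exported template configuration on a new server.
# DRY_RUN=1 (the default) only prints the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run syscfg -i bios-settings.ini        # apply BIOS settings
run raidcfg -i=raidcfg.ini             # apply RAID layout
run racadm config -f idrac-config.ini  # apply iDRAC settings
```

Set DRY_RUN=0 only on a box where the DTK tools are installed and the flags have been verified.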

Friday, March 15, 2013

OSSEC issues connecting to MySQL


Hello again, today I'm going to do some troubleshooting with OSSEC and MySQL. OSSEC is a very powerful host-based intrusion detection system which can help you monitor, in real time, attacks that target your infrastructure. Its main features are monitoring local and remote logs, rootkit detection and file integrity checking, and it can even be configured for active response. More about OSSEC's features can be found here.

By default, OSSEC puts all of its logs in plain text inside /var/ossec/logs/. However, there is an option to put all of the alerts and log activity into a database; the two choices are PostgreSQL and MySQL. The biggest advantage of using a database is being able to monitor attacks against your servers in real time using some graphical tool. There are a few web-based tools I found which can be connected to the MySQL server used by OSSEC. The first one is OSSEC Web UI (OSWUI), which can be downloaded from here. The second one is a Splunk add-on, and it can be downloaded from here. The last web-based GUI, and the one I chose, was Analogi.

The problem – connecting OSSEC with MySQL


The installation went very well; I just followed the installation instructions. I added the new OSSEC repository in CentOS, and it turned out I needed the EPEL repository too, because of missing dependencies. I installed OSSEC and MySQL through yum, and it reported that MySQL was installed from the OSSEC (atomic) repository that I added earlier. The MySQL version is 5.5.30 and the OSSEC version is 2.7.
I followed the instructions for connecting and configuring OSSEC with MySQL from the OSSEC documentation and everything seemed to be OK. Just two things to mention: if the MySQL server is on the localhost, put 127.0.0.1 in the <hostname> tags inside /var/ossec/etc/ossec.conf. I tried with 'localhost', but it gave me an error inside /var/ossec/logs/ossec.log. The other thing to watch for is the mysql.schema file: on RedHat/CentOS systems the file can be found under /var/ossec/etc/mysql/, so there is no need to download the source.

After I installed everything, I started watching the ossec.log file for errors. I connected several times with a wrong username/password through SSH to generate some alerts, and then the following error message appeared in the log:

ERROR: Error executing query 'INSERT INTO server(last_contact, version, hostname, information) VALUES ('1362937495', 'v2.7', 'monitor.example.com', 'Linux monitor.example.com 2.6.32-279.22.1.el6.x86_64 #1 SMP Wed Feb 6 03:10:46 UTC 2013 x86_64 - OSSEC HIDS v2.7')'. Error: 'Lock wait timeout exceeded; try restarting transaction'.

After digging in the database I found this lock:

TRANSACTIONS
------------
Trx id counter E402
Purge done for trx's n:o < 9B28 undo n:o < 0
History list length 303
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0, not started
MySQL thread id 5, OS thread handle 0x7f28f821f700, query id 8354 localhost root
SHOW ENGINE INNODB Status
---TRANSACTION E401, ACTIVE 22 sec inserting
mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 376, 1 row lock(s), undo log entries 1
MySQL thread id 4, OS thread handle 0x7f28f8260700, query id 8352 localhost 127.0.0.1 ossecuser update
INSERT INTO server(last_contact, version, hostname, information) VALUES ('1362991366', 'v2.7', 'monitor.example.com', 'Linux monitor.example.com 2.6.32-279.22.1.el6.x86_64 #1 SMP Wed Feb 6 03:10:46 UTC 2013 x86_64 - OSSEC HIDS v2.7')
Trx read view will not see trx with id >= E402, sees < E400
------- TRX HAS BEEN WAITING 22 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 597 n bits 72 index `hostname` of table `ossec`.`server` trx id E401 lock mode S waiting
Record lock, heap no 2 PHYSICAL RECORD: n_fields 2; compact format; info bits 0
0: len 27; hex 6d7973716c2d73727630312e7365637572657368616b652e636f6d; asc monitor.example.com;;
1: len 2; hex 0001; asc ;;
------------------
---TRANSACTION E400, ACTIVE 972 sec
8 lock struct(s), heap size 1248, 1 row lock(s), undo log entries 3146
MySQL thread id 2, OS thread handle 0x7f28f82a1700, query id 8349 localhost 127.0.0.1 ossecuser
Trx read view will not see trx with id >= E401, sees < E401

My concerns were confirmed after watching the process list inside the MySQL console. Since version 5.5, MySQL's default storage engine is InnoDB; before that it was MyISAM. I thought that the storage engine would be specified inside the schema file, but unfortunately there were only table definitions. So I decided to convert the engine to MyISAM for all the tables used by OSSEC for reporting. First, connect to MySQL with root credentials:

# mysql -h localhost -u root -p

Once connected, enter ossec database and change the storage engine:

mysql > USE ossec;
mysql > SHOW TABLES;
mysql > ALTER TABLE agent ENGINE=MYISAM;

Repeat the last command for all tables inside the ossec database. The result can be seen with:

mysql > SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'ossec' AND ENGINE = 'MYISAM';
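Altering each table by hand gets tedious; here is a small helper function (my own sketch, not part of OSSEC) that generates the ALTER statements from a list of table names, so the output of SHOW TABLES can be piped straight back into mysql:

```shell
# Turn table names (one per line) into ALTER TABLE statements.
# Usage idea:
#   mysql -N -u root -p -e 'SHOW TABLES' ossec | to_myisam | mysql -u root -p ossec
to_myisam() {
  while read -r t; do
    printf 'ALTER TABLE `%s` ENGINE=MYISAM;\n' "$t"
  done
}
```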

After confirming that all of the tables use the MyISAM engine, restart the OSSEC server. Check the ossec.log again and you should have no more issues with the database connection.
If you are reading this before installing OSSEC, it is better to change the mysql.schema file instead: just put the default storage engine into the table definitions before applying the schema, and the tables will be created with the correct configuration. A definition should look like:

CREATE TABLE category
(
cat_id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
cat_name VARCHAR(32) NOT NULL UNIQUE,
PRIMARY KEY (cat_id),
INDEX (cat_name)
) ENGINE = MYISAM;

This should be added to all table definitions. 
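The edit can also be scripted; this sed filter is a sketch that assumes every CREATE TABLE block in mysql.schema closes with a line containing only ");" (worth checking against your copy of the file first):

```shell
# Append the storage engine to every table definition in the schema.
# Usage: add_engine < mysql.schema > mysql.schema.myisam
add_engine() {
  sed 's/^);$/) ENGINE = MYISAM;/'
}
```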
I hope someone will find this useful. I want to share a few additional resources about setting up OSSEC and Analogi on CentOS:

And, of course, if you are considering an OSSEC deployment in your infrastructure, I highly recommend this book.

Monday, February 18, 2013

Routing issues with OpenWRT and OpenVPN

Hi everybody, today I'm going to expand on the OpenVPN task and solution. This post is an addition to the last one, and all the configuration files are the same. I assume you already have a configured OpenVPN client (TP-Link WDR4300) with OpenWRT, connected to the OpenVPN server through a tunnel. So let's get started.

The task

After setting up the OpenVPN tunnel, add an additional route to an IP address that is not in the client's or the server's subnet. All traffic to this outside IP address should be routed through the VPN tunnel.

The solution

This doesn't sound very complicated, but it turns out to be a little bit tricky. In OpenWRT, at least in the version I'm using, setting an additional route requires installing the "ip" package. Here is a link to the OpenWRT wiki page on routing. As stated in that article, the first thing you have to do is get the "ip" tool for OpenWRT. This can be accomplished with the following commands on the OpenWRT command line:

# opkg update
# opkg install ip

After installing it, you can try adding a new route with the command:

# ip route add 88.88.88.88/32 via 192.168.2.1 dev tun0

*88.88.88.88 is an outside IP used in the example for presentation purposes only.
Test the new route with the traceroute command:

# traceroute 88.88.88.88

Most probably this test will show you that the packet is routed through the VPN tunnel, but there will be no output after the VPN gateway (192.168.2.1). Then you can try to ping the same IP, and you will get either a "Port unreachable" error or no reply at all.
The next thing you have to do is tweak the iptables rules to allow packets from everywhere to pass through your VPN tunnel. Open the file /etc/firewall.user and edit the last two lines, described in my last post. They should end up looking like:

/usr/sbin/iptables -I FORWARD -i tun0 -d 10.16.5.0/24 -o br-lan -s 0.0.0.0/0 -j ACCEPT
/usr/sbin/iptables -I FORWARD -o tun0 -s 10.16.5.0/24 -i br-lan -d 0.0.0.0/0 -j ACCEPT

Restart the firewall, check the route again, and test again with traceroute and ping.

# /etc/init.d/firewall restart
# ip route show
# traceroute 88.88.88.88

Now you should have a working connection to the outside IP through the VPN tunnel. So far everything is standard, but if you try to make this route get added at the router's boot time, you will get stuck. I tried a lot of OpenWRT configurations and none seemed to work. Finally, I tried adding it when the tunnel is initialized by OpenVPN, and that worked like a charm.
You need to create the file /etc/openvpn/additional_routes.sh, which contains the following lines:
#!/bin/ash
# This script adds additional routes after OpenVPN initialization
/usr/sbin/ip route add 88.88.88.88/32 via 192.168.2.1 dev tun0

Next, add these two lines to /etc/openvpn/openvpn.conf:

script-security 2
up /etc/openvpn/additional_routes.sh

What does this do? When OpenVPN is started on the client (the TP-Link router), it will add the new route through the tunnel. What is missing? As configured, OpenVPN will only create the route; it won't delete it. In this scenario that is not needed, because OpenVPN starts with the router, and when the router is turned off everything is gone. If, in your scenario, you stop and start the OpenVPN service on the client, it is better to delete the route every time and add it again at start.
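If you do need that cleanup, a companion "down" script can remove the route when the tunnel is torn down. This is only a sketch in the same style as additional_routes.sh; the file name /etc/openvpn/del_routes.sh is my own choice:

```shell
#!/bin/ash
# This script removes the additional routes when the tunnel goes down
/usr/sbin/ip route del 88.88.88.88/32 via 192.168.2.1 dev tun0
```

Then reference it from /etc/openvpn/openvpn.conf next to the up line with: down /etc/openvpn/del_routes.sh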

When you are done configuring, just restart the OpenVPN service and, if everything is OK, restart the router. I hope this will be handy to some of you.

Monday, January 28, 2013

OpenVPN and TP-LINK WDR4300 with OpenWRT

Hello everybody, it's been a while since my last post, but I'm back, and today I'll show you how to configure and use an OpenVPN server with a TP-Link WDR4300 as a client.

The task

You have two sites: one is the headquarters, where the OpenVPN server runs, and the other site is the OpenVPN client. The client should be a hardware device (TP-Link WDR4300), and the LAN computers behind the OpenVPN client should reach all the machines in the OpenVPN server's network. The same should hold for the machines on the OpenVPN server's side: they should communicate with the computers behind the OpenVPN client. Both the client and the server should route the traffic from each other's subnets. For easier deployment, the OpenVPN client and server are installed on the gateways of their subnets. In the picture below you can see the scenario. This solution can be used for backups, file transfer or just administration of the remote site.

 

Used software: Client and server OpenVPN 2.2.2, OpenWRT - Attitude Adjustment RC1 

Solution

Let's begin with the OpenVPN client. First I need a firmware that runs some kind of Linux OS, on which I can install OpenVPN and configure it as a client.
The first distribution I tried was DD-WRT. Unfortunately, I couldn't find anywhere on the DD-WRT website whether the TP-Link WDR4300 is supported or not. In the forum there is a post from someone who had successfully flashed it, but the procedure was terrible, and if anything went wrong you could end up with a bricked router. So I decided to try another one – OpenWRT. There is much more information about the WDR4300 on the OpenWRT site; there is even a dedicated page in the wiki.
The hardware version of the device is very important. If you don't choose the right version for your TP-Link router, you will end up with a bricked device, meaning the router can't be used anymore and you would have to buy a new one. There are instructions on the OpenWRT wiki page for the WDR4300 on how to de-brick it, but they are not easy to follow. So choose carefully! In my case the device has hardware version 1.2, so I went straight to the Attitude Adjustment RC1 build.
The installation was very easy: I logged into the WDR4300's default admin page and, from the menu on the left, chose System Tools → Firmware Upgrade. Select the file you just downloaded from the OpenWRT download page, cross your fingers :) and hit the Upgrade button.
If everything is OK, you will first see that the upgrade was successful, and then the device will reboot. After the reboot you will most probably see a message that the page could not be delivered. Don't panic: by default the address of the WDR4300 router is 192.168.0.1, and after the flash it changes to 192.168.1.1. Log in, change the password and leave it aside.

Next we will talk about the certificates needed for secure communication between the server and the client. There are a few programs for creating certificates. The first and most popular is the openssl command. It's a great tool, very powerful, but you will need some time to configure it and choose the right options. Another option is the easy-rsa scripts that come with OpenVPN, located inside /usr/share/openvpn/easy-rsa/. There are two graphical tools which will do the job too – TinyCA and XCA. If you choose to generate the needed certificates with a tool other than easy-rsa, look here: the key usage and extended key usage of the certificates should have the values specified in the table.
I generated certificates twice, first with the openssl program, where I didn't have any trouble, and a second time with a graphical tool. Actually, with the graphical program I had more problems than with openssl. If you use a graphical interface for creating certificates, don't forget to specify the right options for key usage and extended key usage, otherwise OpenVPN won't start. If you get OpenVPN errors like "unsupported certificate purpose" and you generated the certificates with openssl, check this blog.
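A quick way to see what a certificate actually carries before handing it to OpenVPN is to print its X.509 extensions. This generic openssl check is my own sketch (the file name matches the client config later in this post); for a client certificate the output should include "TLS Web Client Authentication":

```shell
# Print a certificate's key-usage extensions.
# Usage: show_key_usage vpnclient01-cert.pem
show_key_usage() {
  openssl x509 -noout -text -in "$1" | grep -A1 'Key Usage'
}
```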
 
Besides the certificates, to harden security I created an HMAC signature. As written in the documentation, OpenVPN will drop any UDP packets not bearing the correct HMAC signature. This can protect you from DoS attacks, port scanning and even buffer overflow vulnerabilities in the SSL/TLS implementation. The HMAC key is generated with the command:

$ openvpn --genkey --secret openvpn-hmac.key

One last thing before everything is ready – generate the Diffie-Hellman parameters. They are needed only on the server side. You can generate them either with the easy-rsa script build-dh or with openssl:

$ openssl dhparam -outform PEM -out openvpn-srv01-dh.pem 1024

Here are the working configuration files, first for the client:

client
proto tcp
remote openvpn-srv.example.com
port 1194

dev tun
nobind

ca /etc/openvpn/CA_cert.pem
cert /etc/openvpn/vpnclient01-cert.pem
key /etc/openvpn/vpnclient01-key.pem
tls-auth /etc/openvpn/openvpn-hmac.key 1
cipher AES-256-CBC

verb 4
tls-remote openvpn-srv.example.com

and then for the server: 

proto tcp
port 1194
dev tun
server 192.168.2.0 255.255.255.0

ca /etc/openvpn/CA_cert.pem
cert /etc/openvpn/openvpn-srv-cert.pem
key /etc/openvpn/openvpn-srv-key.pem
dh /etc/openvpn/openvpn-dh.pem
tls-auth /etc/openvpn/openvpn-hmac.key 0
cipher AES-256-CBC

persist-key
persist-tun
keepalive 10 60

script-security 2

push "route 10.16.2.0 255.255.255.0"
topology subnet

user nobody
group nobody
verb 4

client-to-client
client-config-dir /etc/openvpn/ccd/
route 10.16.5.0 255.255.255.0 192.168.2.1

log-append /var/log/openvpn.log
status /var/log/openvpn.status 

I placed the configuration files and certificates on both server and client inside /etc/openvpn

There are a couple of things to mention about the client and server configurations. Why did I choose the TCP protocol instead of the default and recommended UDP? Unfortunately, a router in the path between the OpenVPN client and the OpenVPN server is configured to drop UDP packets, and because of that the communication between them failed. That's the reason I chose to use TCP; if you are lucky and don't have such a case, go with UDP – it is the recommended one.
The other interesting thing is the client configuration. I need the client to get a static IP on the VPN network after it connects to the OpenVPN server. This can be accomplished with the client configuration directory: in this directory we create a file named after the common name of the client's certificate. If you have forgotten your client certificate's common name, you can check it with the command:

$ openssl x509 -subject -noout -in vpnclient01-cert.pem

So let's say the command gives the output:
CN=openvpn-client.example.com
Then the file should be named "openvpn-client.example.com", and I placed it in /etc/openvpn/ccd.
The file should look like:
 
iroute 10.16.5.0 255.255.255.0
ifconfig-push 192.168.2.3 255.255.255.0 

10.16.5.0/24 is the subnet of the OpenVPN client, and 192.168.2.3 is the IP address of its tun0 interface.
Now we are done with OpenVPN, and it's time to finish the OpenWRT configuration.
To set the IP address of the LAN, log into the router's web interface and click the Network tab → Interfaces → LAN → Edit. After that, from the Interfaces page, edit the settings for the WAN interface.
You can install the OpenVPN software either from the web interface or from the command line. I prefer to install it from the command line: if there is a problem, I can see it and debug it. To use the command line, connect to the router with SSH. Once you have logged into OpenWRT, install OpenVPN with:

# opkg update
# opkg install openvpn

More info on installing openvpn can be found here and here.

It's time to check if the OpenVPN configuration works properly. On the OpenVPN server, start OpenVPN with the following command:

# openvpn --daemon --config /etc/openvpn/openvpn.conf

You can track the process of establishing a connection with the client:

# tailf /var/log/openvpn.log

Next, connect to the OpenVPN client (OpenWRT) over SSH and type:

# openvpn --config /etc/openvpn/openvpn.conf

You will see a bunch of messages on both server and client, and if everything is OK, you will finally see "Initialization Sequence Completed". If something goes wrong, you will see an error message; try to google it or post it in the comments.

When the connection is established, type the following command on the client command line and look for a device named "tun0" with the IP address "192.168.2.3":

# ifconfig -a 

If "tun0" is listed and has the right address, try to ping the server (192.168.2.1). Depending on the firewall configuration on the OpenVPN server you may succeed, but you will have trouble pinging the 10.16.2.0/24 clients.

Before I clear this up, it's recommended to add this connection as an interface and create a firewall zone in OpenWRT. Adding an interface is done through the OpenWRT web interface: Network tab → Interfaces → Add. Name the interface "VPN" or similar, and choose the "tun0" device in "Cover the following interface". In the field "Protocol of the new interface" choose "Static address". Then go to "Edit" and fill in the IP address (192.168.2.3), the network mask and the gateway (192.168.2.1). Next, click on the Advanced Settings tab and uncheck the "Bring up on boot" option.
Go to the Firewall page under the Network tab and click "Add". Allow all the chains within the new zone to pass traffic, associate the new zone with the VPN interface, and allow traffic to be forwarded from and to the LAN zone.
Now you can try again to ping machines in the 10.16.2.0/24 network – most probably without result. To make this work, add these 4 iptables rules to /etc/firewall.user on the OpenWRT device.

/usr/sbin/iptables -I INPUT -i tun0 -d 10.16.5.0/24 -s 10.16.2.0/24 -j ACCEPT
/usr/sbin/iptables -I OUTPUT -o tun0 -s 10.16.5.0/24 -d 10.16.2.0/24 -j ACCEPT
/usr/sbin/iptables -I FORWARD -i tun0 -d 10.16.5.0/24 -o br-lan -s 10.16.2.0/24 -j ACCEPT
/usr/sbin/iptables -I FORWARD -o tun0 -s 10.16.5.0/24 -i br-lan -d 10.16.2.0/24 -j ACCEPT


Reboot the device and try again to ping computers on the 10.16.2.0/24 network; they should respond. The same should be true in the reverse direction.

I know it will take you some time to read through these instructions; don't be shy, use the comments to ask questions if something is not clear to you, or if you have better ideas to make the whole thing work more easily or more securely. In this case I used the TP-Link WDR4300 router, but it should be the same on anything else that runs OpenWRT.
If you are interested in OpenVPN and want to learn more scenarios and check other options, I highly recommend the OpenVPN 2 Cookbook. For more info about OpenVPN, check the manual page.

Sunday, October 28, 2012

OpenLDAP 2.4 Configuration

Hello everybody, today we are going to create an OpenLDAP server for central authentication. I won't give you an introduction to the LDAP service, but will just show a basic configuration. In the end you will have a working server without any objects and with one access rule.

I'm going to install the OpenLDAP server on CentOS 6.3 as a binary package, not from source. The current version of the openldap-servers package, as of this writing, is 2.4.23.

# yum install openldap-servers

Since version 2.3, OpenLDAP has a new configuration engine and uses LDIF files for configuring the server and the directory database. The new dynamic runtime configuration engine with LDIF files is a little bit tricky until you get used to it.

First you need a username and password for the slapd configuration database; later you can use it to add another database or module, or to extend the default schema. Next you need another admin user for configuring the actual LDAP tree database. These two accounts are configured in different files.

For the configuration database you should add these two lines to /etc/openldap/slapd.d/cn\=config/olcDatabase\=\{0\}config.ldif

olcRootDN: cn=root,cn=config
olcRootPW: {SSHA}rtCwogFExs+w4sE2Mp1RtNnio2SWgijY

The olcRootDN attribute contains the administrator's distinguished name (the first common name could be admin, manager or whatever you like, but the second one must be config, e.g. cn=admin,cn=config), and the olcRootPW attribute contains the password. The password is not in plain text but is a hash (SSHA), with a salt known only to the slapd server. This is the most secure option: even if someone reads the file, it is almost impossible to recover the original password. You can generate a password hash with the slappasswd utility. Copy the output to the olcRootPW attribute.

# slappasswd

Notice: you can insert your password directly, in plain text, into the configuration file without using the slappasswd program. This is not recommended from a security standpoint. The password would look like this:
olcRootPW: {CLEARTEXT}MyPassword

Every LDAP tree has an admin account for adding objects to the tree and changing objects' attributes. This is the first thing you should set up when configuring the directory tree. OpenLDAP comes with a preinstalled database, and on CentOS its file is /etc/openldap/slapd.d/cn\=config/olcDatabase\=\{2\}bdb.ldif
The olcRootDN attribute is already populated by default. You can keep the common name, but you should change the domain components to reflect your organization's structure. Use slappasswd again to generate a password hash, and add the olcRootPW attribute with the output of the slappasswd command.

olcRootDN: cn=admin,dc=mycompany,dc=com
olcRootPW: {SSHA}rtCwogFExs+w4sE2Mp1RtNnio2SWgijY

Note: Do not copy-paste the olcRootPW attribute directly from here!

In /etc/openldap/slapd.d/cn\=config/olcDatabase\=\{2\}bdb.ldif you should change the olcSuffix attribute too, so it represents your organization's structure. The default configuration of the directory tree doesn't include any access rules. I'll give one rule I think is mandatory for every directory, but it's up to you to add more rules that suit your organization's needs:

olcAccess: to * by self write by anonymous auth by users read

The order of the rules is very important, because slapd stops at the first rule that matches the entry and/or attribute; the corresponding access directive is the one slapd will use to evaluate access. If there is no match, access is denied. More information and details about configuring ACLs can be found in the OpenLDAP Administration Guide.

Before starting the slapd daemon you can check for errors with the slaptest command. On CentOS the files inside /var/lib/ldap are owned by root, so you should change that:
# chown ldap.ldap /var/lib/ldap/*

And finally start the daemon with :
# /etc/init.d/slapd start

If everything is OK, check whether the daemon is listening with the command:
# netstat -patune | grep slapd
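To also confirm that the cn=config credentials set earlier actually work, you can bind with them over LDAP; a quick check (it will prompt for the olcRootPW password, and the bind DN is the example from above):

```shell
# ldapsearch -x -H ldap://localhost -D "cn=root,cn=config" -W -b "cn=config" dn
```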

By default slapd messages are not logged; if you want to see what the daemon is doing, add the following lines to /etc/rsyslog.conf:

#Log file for slap daemon
local4.* /var/log/slapd.log

One more thing is needed to get slapd log files: add the following line inside /etc/openldap/slapd.d/cn\=config.ldif.

olcLogLevel: 0x20 0x40 0x80 0x100

These log levels give information about search filter processing (0x20), configuration file processing (0x40), access control list processing (0x80) and stats log connections/operations/results (0x100). You can add even more; all log levels are well documented in the manual pages of slapd-config. Restart both rsyslog and slapd for the changes to take effect.
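These hex values are bit flags and the log level is additive, so the four can also be combined into a single integer. A quick shell check of the sum:

```shell
# 0x20 + 0x40 + 0x80 + 0x100 = 32 + 64 + 128 + 256 = 480,
# so "olcLogLevel: 480" is equivalent to listing the four
# levels separately.
printf '%d\n' $((0x20 + 0x40 + 0x80 + 0x100))
# → 480
```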

# /etc/init.d/rsyslog restart
# /etc/init.d/slapd restart

More information can be found in the OpenLDAP Administration Guide at http://www.openldap.org/doc/; a full reference of the configuration attributes can be found in the manual pages of slapd-config.
I highly recommend the book Mastering OpenLDAP: Configuring, Securing and Integrating Directory Services for more configuration options and information about the LDAP protocol and the OpenLDAP server. The only downside is that it is written for the old configuration syntax with slapd.conf, so the configuration examples must be rebuilt to suit the new one.
 

Saturday, 29 September 2012

Apache: Redirect HTTP requests to HTTPS

Today I want to discuss redirecting an HTTP web page to its HTTPS secure location. My scenario is a web server that by default works only over HTTPS, which should be able to redirect an HTTP request to an HTTPS one. In the end, whether you contact the web server over HTTP or HTTPS, you connect only through HTTPS for secure communication. I use the Apache web server.
There are two possible solutions: one with mod_rewrite, which according to the Apache wiki is not the recommended method (ApacheWiki mod_rewrite), and the other, which I'm after, using the "Redirect" directive (ApacheWiki RedirectSSL). I don't use .htaccess files in my scenario, so the solution should be in tweaking httpd.conf.
In this case I'll use a CentOS 6 server with Apache 2.2, installed from the binary package that comes with the distribution. The SSL configuration lives in /etc/httpd/conf.d/ssl.conf, and the default configuration file httpd.conf is inside the /etc/httpd/conf directory. I assume you have already created or bought an SSL certificate and can contact your server at https://www.mydomain.com.
So first you should create a new VirtualHost configuration, which should listen for and accept requests over HTTP on port 80. I chose to add this definition at the end of my httpd.conf instead of creating a new file, because it is only a couple of lines and its only purpose is to redirect traffic to the secured location. I used the example provided in the Apache wiki:

NameVirtualHost *:80
<VirtualHost *:80>
   ServerName www.example.com
   Redirect permanent / https://www.example.com/
</VirtualHost>

After you insert it at the end of httpd.conf and save it, check the configuration for errors with:

# apachectl configtest

Now you have added the VirtualHost configuration, but if you restart Apache you will see that nothing happens; I even found that my server doesn't listen on port 80:

# netstat -patune | grep ":80"

So I should instruct Apache to listen on port 80 with the Listen directive. But what happens to the SSL configuration? There is a "Listen 443" line inside /etc/httpd/conf.d/ssl.conf, and the server in fact continues to listen on port 443 because of the default SSL VirtualHost configuration. In addition, I add the SSLRequireSSL and SSLOptions directives to my website root (as suggested here) in /etc/httpd/conf/httpd.conf:

.....
Listen 80
.....
<Directory "/var/www/html/">
.....
SSLRequireSSL
SSLOptions +StrictRequire
.....
</Directory>

.....

More info about these directives can be found here.
Check again for errors after the changes, and if you get "Syntax OK", restart Apache:

 # apachectl configtest
 # service httpd restart
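To confirm the redirect works, request the plain-HTTP URL and inspect only the response headers; the hostname here is the example one from the VirtualHost above, so substitute your own:

```shell
# -I fetches only the headers; "Redirect permanent" should
# answer with "301 Moved Permanently" and a Location header
# pointing at the https:// URL.
curl -I http://www.example.com/
```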


One more thing not to forget is to check your firewall and open port 80; on CentOS the easiest way is with the command:

 # system-config-firewall-tui

Note: For the command to work you should install the package of the same name.
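If you prefer to skip the TUI, the same change can be sketched directly with iptables (assuming the stock CentOS 6 firewall setup, where rules are saved to /etc/sysconfig/iptables):

```shell
# Insert a rule accepting inbound TCP on port 80,
# then persist the running rule set across reboots.
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
service iptables save
```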