
Creating a new LDAP server with FreeIPA and configure to allow vSphere authentication

I was setting up a new FreeIPA server for my homelab and found out that the default configuration in FreeIPA does not allow you to use VMware vSphere as an LDAP client, since it is not fully RFC 4519 compliant and is missing some other LDAP object class settings.

Let's go through the steps of setting up a new FreeIPA server. We are going to use the official ansible repository and collection for this purpose.

For this article we have the following assumptions:

  • Ansible host in the same subnet as the server that will be set up with FreeIPA.
  • SSH connectivity without a password (SSH key) to the FreeIPA server.
  • FreeIPA server with CentOS 7 at freeipa.cloudalbania.com with at least 1 GB of memory and 8 GB of disk space.
  • You already have vCenter up and running.

Preparing the Ansible host and FreeIPA repository

We are going to use the official ansible-freeipa repository to install FreeIPA. On a host with ansible 2.9+, issue the following commands to install and set up the initial FreeIPA server.

Prepare the git repo and the inventory file
$ git clone https://github.com/freeipa/ansible-freeipa.git
$ cd ansible-freeipa
$ cat << EOF > inventory/my-freeipa-server
[ipaserver]
freeipa.cloudalbania.com

[ipaserver:vars]
ipaserver_domain=cloudalbania.com
ipaserver_realm=CLOUDALBANIA.COM
ipaadmin_password=<STRONG PASS>
ipadm_password=<STRONG PASS>
EOF
 
Install the ansible collection for FreeIPA:

ansible-galaxy collection install freeipa.ansible_freeipa -p ./
Customize the ansible.cfg file:
$ cat ansible.cfg
[defaults]
host_key_checking = False
deprecation_warnings=False
collections_paths = ./
roles_path = ./roles
nocows=1

Installing FreeIPA

In the same directory of the ansible repo, run the following to install the FreeIPA server:
$ ansible-playbook -u root -i inventory/my-freeipa-server playbooks/install-server.yml
After 3-4 minutes the server should be up and running.
Check the installation on the server with the ipactl status command:

Finally, log in to your server at https://freeipa.cloudalbania.com with the user admin@cloudalbania.com and the password we set in the ansible inventory.

Main login screen

Main screen after login

Configure FreeIPA for RFC4519 and vSphere

The next steps follow this FreeIPA article to customize the directory schema for vSphere authentication.
$ cat << EOF > vsphere_usermod.ldif
dn: cn=users,cn=Schema Compatibility,cn=plugins,cn=config
changetype: modify
add: schema-compat-entry-attribute
schema-compat-entry-attribute: objectclass=inetOrgPerson
-
add: schema-compat-entry-attribute
schema-compat-entry-attribute: sn=%{sn}
-
EOF
$ cat << EOF > vsphere_groupmod.ldif
dn: cn=groups,cn=Schema Compatibility,cn=plugins,cn=config
changetype: modify
add: schema-compat-entry-attribute
schema-compat-entry-attribute: objectclass=groupOfUniqueNames
-
add: schema-compat-entry-attribute
schema-compat-entry-attribute: uniqueMember=%mregsub("%{member}","^(.*)accounts(.*)","%1compat%2")
-
EOF

Apply them with the following

$ ldapmodify -x -D "cn=Directory Manager" -f vsphere_usermod.ldif -W
and this
$ ldapmodify -x -D "cn=Directory Manager" -f vsphere_groupmod.ldif -W
Run the following commands as admin to allow the new sn attribute for compat users and uniqueMember for compat groups:
# ipa permission-mod "System: Read User Compat Tree" --includedattrs sn
# ipa permission-mod "System: Read Group Compat Tree" --includedattrs uniquemember
In case you get an error running the above commands, authenticate first from the console with the following command:
$ kinit admin

Initial configuration for FreeIPA

At this point we need to create at least three resources in FreeIPA:
  1. A bind user that vCenter will use to bind to the LDAP server; we are using bind-user@cloudalbania.com
  2. An end user, in this case bzanaj@cloudalbania.com
  3. Two LDAP groups that our users will be added to: vcsa-admins and vcsa-readonly.
We are doing this so that we do not assign permissions to individual users, and instead manage permissions through groups in our LDAP server.
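If you prefer the CLI over the web UI, the same resources can also be created with the ipa command. This is only a sketch: the first/last names and group descriptions below are placeholders, so adjust them to your environment.

$ kinit admin
$ ipa user-add bind-user --first=Bind --last=User --password
$ ipa user-add bzanaj --first=FirstName --last=LastName --password
$ ipa group-add vcsa-admins --desc="vCenter administrators"
$ ipa group-add vcsa-readonly --desc="vCenter read-only users"
$ ipa group-add-member vcsa-admins --users=bzanaj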
Users in FreeIPA:
Groups:

Then add the users to the groups:

Configure vSphere Authentication for FreeIPA

In the vSphere GUI go to Administration -> Single Sign On -> Configuration -> Identity Providers and then click Add.

In the next screen enter the following details as shown in the screenshot below:

Note: I am not using a certificate to authenticate on the LDAP server as it is out of the scope of this article.
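For reference, the values I entered look roughly like the ones below. These are based on the FreeIPA compat tree layout and my cloudalbania.com domain, so treat them as an example rather than exact values and adjust them to your own realm:

Identity source type: Open LDAP
Name: cloudalbania.com
Base distinguished name for users: cn=users,cn=compat,dc=cloudalbania,dc=com
Base distinguished name for groups: cn=groups,cn=compat,dc=cloudalbania,dc=com
Domain name: cloudalbania.com
Username: uid=bind-user,cn=users,cn=accounts,dc=cloudalbania,dc=com
Password: the bind-user password set earlier
Primary server URL: ldap://freeipa.cloudalbania.com:389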

After you save this configuration and there are no errors, you can assign the groups in the Permissions settings under Access Control.

At the end we should see the following:


Offline generation of Let’s Encrypt certificates

Sometimes we need to get a Let's Encrypt SSL certificate for a system that is not connected to the internet or where the certbot client cannot be installed. There is an easy way to generate an SSL chain that we can use in our internal applications.

Install certbot

On a Linux system (even a temporary one) install certbot. The example below is performed on an Ubuntu 18.04 box.

$ sudo apt install certbot
$ certbot --version
certbot 0.27.0

We are going to generate a certificate for a host named hostname.domain.com. Let's Encrypt allows offline issuance through a DNS challenge, which means that during certificate generation you should have your DNS registrar's/manager's console open in another window.
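When certbot later asks you to deploy the TXT record, it is worth verifying from a second terminal that the record has actually propagated before you continue. Assuming dig is installed, a quick check looks like this:

$ dig +short TXT _acme-challenge.hostname.domain.com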

Initiate the request with the following command

$ sudo certbot certonly --manual --preferred-challenges dns -d hostname.domain.com
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator manual, Installer None
Obtaining a new certificate
Performing the following challenges:
dns-01 challenge for hostname.domain.com

– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
NOTE: The IP of this machine will be publicly logged as having requested this
certificate. If you’re running certbot in manual mode on a machine that is not
your server, please ensure you’re okay with that.

Are you OK with your IP being logged?
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
(Y)es/(N)o: Y

– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Please deploy a DNS TXT record under the name
_acme-challenge.hostname.domain.com with the following value:

VUiRL_FOsDDlOFGYVhZCsIHVtfJ03usFLxkPfVvmOos

Before continuing, verify the record is deployed.

– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –

Press Enter to Continue

Waiting for verification…

Now it is time to add the TXT record on your DNS server. As soon as the record is there and you press Enter, the following will continue on your terminal:

Cleaning up challenges

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/hostname.domain.com/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/hostname.domain.com/privkey.pem
   Your cert will expire on 2021-07-11. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again. To non-interactively renew *all* of your certificates, run
   "certbot renew"
 - If you like Certbot, please consider supporting our work by:

Donating to ISRG / Let’s Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le

And there you go, the new private key and the certificate chain are in the default Let's Encrypt location at:
/etc/letsencrypt/live/hostname.domain.com/fullchain.pem
/etc/letsencrypt/live/hostname.domain.com/privkey.pem
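Since the target system is offline or cannot run certbot, these two files are typically copied over by hand. As a quick sanity check of the chain before copying (user@target-host below is just a placeholder for your own system):

$ sudo openssl x509 -noout -subject -enddate -in /etc/letsencrypt/live/hostname.domain.com/fullchain.pem
$ sudo scp /etc/letsencrypt/live/hostname.domain.com/fullchain.pem /etc/letsencrypt/live/hostname.domain.com/privkey.pem user@target-host:/tmp/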

In case you already have a CSR file from a device or server, just add --csr to the above command with the CSR file as its argument:

$ sudo certbot certonly --manual --preferred-challenges dns -d hostname.domain.com --csr <csr_file.csr>

Set up Foreman and manage it with Ansible

I have been managing Foreman recently and got bored of configuring it by hand each time I set it up from scratch.

This blog post will cover an initial Foreman install on a CentOS 7 server and then managing it with ansible through the Foreman ansible collection.

The repository used in this article is located here.

Servers recommendations

Minimum Foreman server hardware recommendations to support CentOS 7 & 8.

  • CentOS 7
  • 4 CPUs
  • 8 GB RAM
  • 100 GB HDD

Minimum ansible server recommendations:

  • CentOS 7
  • 1 CPU
  • 256 MB RAM
  • 8 GB HDD

Setting up the Foreman server

Configure the OS

Create a CentOS 7 server with the above hardware settings and make sure there is a working DNS record for that server, or edit its own /etc/hosts with that hostname. For simplicity I am using foreman.cloudalbania.com -> 192.168.0.180.
 
$ cat /etc/hosts
...
192.168.0.180    foreman.cloudalbania.com    foreman
 
Reboot the server and make sure the new hostname is set.
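If the hostname does not come back as expected, it can be set explicitly with hostnamectl (available by default on CentOS 7) and checked, for example:

$ hostnamectl set-hostname foreman.cloudalbania.com
$ hostname -f
foreman.cloudalbania.com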

Install Katello & Foreman

The next step is to install the Foreman application with Katello content management.
This is a pretty straightforward step:
 
Install repositories
yum -y localinstall https://yum.theforeman.org/releases/1.24/el7/x86_64/foreman-release.rpm
yum -y localinstall https://fedorapeople.org/groups/katello/releases/yum/3.14/katello/el7/x86_64/katello-repos-latest.rpm
yum -y localinstall https://yum.puppet.com/puppet6-release-el-7.noarch.rpm
yum -y localinstall https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum -y install foreman-release-scl
 
Then update the OS and restart if there is any kernel or glibc upgrade:
yum -y update
 
Install the katello packages to prepare for the installation step later. This might take some time:
yum -y install katello
 
Finally, install Foreman with Katello. Change the variables accordingly:
$ foreman-installer --scenario katello --foreman-initial-organization 'CloudAlbania' --foreman-initial-location 'YYZ' --foreman-initial-admin-username 'admin' --foreman-initial-admin-password 'password123' --foreman-foreman-url 'https://foreman.cloudalbania.com' -v
 
After 10-15 minutes the server should be up and running and reachable from your browser
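A quick way to confirm the web UI is answering, before opening the browser, is a plain curl request (-k is needed because the installer-generated certificate is not trusted by the client yet):

$ curl -k -I https://foreman.cloudalbania.com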
 

Install and configure the ansible server

To manage the Foreman server you can already do all the configuration in the GUI. If you need a more documented and automated configuration, then Ansible is the way.
 
In this guide I am using a CentOS 7 server.
 
$ yum -y install epel-release
$ yum -y install ansible git
 
Make sure the ansible host can connect to the foreman server without a password, for the sake of this guide. You can implement vaults or sudo users in a production environment for better security.
 
Create your ansible workplace:
$ mkdir -p git/foreman
$ cd git/foreman
 
Install the theforeman.foreman collection:
$ ansible-galaxy collection install theforeman.foreman
 
Install the ansible dependencies with pip:
$ pip install subnet ipaddress rpm deb apypie PyYAML
 
Now your ansible server should be ready to configure the foreman server.

Collections Usage

Full documentation for each individual module can be obtained with the ansible-doc command as follows:
 
ansible-doc theforeman.foreman.foreman_architecture
 

Your first playbook

To start configuring the Foreman server, we can begin with a Day 1 configuration item: the Organization name.
NOTE: The Foreman collection modules do not need to run on the foreman server itself; rather, we will use the local connection and ansible will reach foreman over its API on port 443.
 
The first playbook we can start with is the definition of the Organization itself. Even though we have defined it in the setup command above, it is a good practice to have it defined in a configuration management system for consistency.
 
The playbook content:
[root@ansible foreman]# cat foreman1.yml
- name: Day 1
  hosts: localhost
  tasks:
    - name: "Create CI Organization"
      theforeman.foreman.organization:
        username: "admin"
        password: "password123"
        server_url: "https://foreman.cloudalbania.com"
        name: "{{ item }}"
        state: present
        validate_certs: no
      loop:
        - "CloudAlbania"
        - "Organization 2"
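Assuming the playbook is saved as foreman1.yml in the current directory, it targets localhost and needs no inventory, so it can be run simply with:

$ ansible-playbook foreman1.yml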
 
And when run we will see the below:
 
 
 
 
 
and in the foreman organizations list we will see:
 
If we run the playbook again, there will be no changes
 
This is the end of this guide. I will follow up with more detailed Day 1 configurations.

Compile latest OpenSSL for CentOS 5

Sometimes we are still managing very old hardware or OS versions, and there is nothing in our hands to change that until it phases out.

Here is an example from CentOS 5, where the latest OpenSSL package is 0.9.8e and of course it does not support TLS 1.2.

Let's start with downloading and uncompressing the OpenSSL package. The latest version supporting CentOS 5 at the time of this writing, according to the OpenSSL webpage, is 1.0.2o.

$ cd /usr/src/
$ curl -O -L https://www.openssl.org/source/openssl-1.0.2o.tar.gz
$ tar zxvf openssl-1.0.2o.tar.gz
$ cd openssl-1.0.2o

Below are some libraries that we need on our system for a successful compilation of the package. Allow the dependencies to be installed in this process as well, such as kernel-headers, cpp, cvs, etc.

$ yum install expat-devel gettext-devel zlib-devel gcc autoconf gcc libtool perl-core zlib-devel

Now it's time to configure and compile OpenSSL. It is worth running the tests to see if there are any unexpected errors.

$ ./config --prefix=/usr/local/openssl --openssldir=/usr/local/openssl shared zlib
$ make
$ make test

prefix and openssldir set the output paths for OpenSSL, shared will force creating shared libraries, and zlib means that compression will be performed using the zlib library.

In order to install the libraries and the new binary you need to execute:

$ make install


OpenSSL shared libraries have been installed in:
  /usr/local/openssl

The OpenSSL sources are required to compile other tools such as curl, Apache, Nginx, etc., so I don't remove them.

To test the new binary you can initially check the version and then try the connectivity with a TLS1.2 only website such as GitHub:

/usr/local/openssl/bin/openssl version
OpenSSL 1.0.2o  27 Mar 2018


$ /usr/local/openssl/bin/openssl s_client -connect github.com:443 -tls1_2 | grep Protocol
    Protocol  : TLSv1.2


Success!

Add new version to PATH

After the installation you will probably want to check the version of OpenSSL, but it will print out the old version. Why? Because the old one is also installed on your server. I rarely override packages installed via yum. The reason is that when there is a new version of OpenSSL and you install it via yum, it will simply override the compiled version, and you will have to recompile it again.

Instead of overriding files, I personally prefer to create a new profile entry and force the system to use the compiled version of OpenSSL.

In order to do that, create the following file:

$ vi /etc/profile.d/openssl.sh
and paste the following content there:

$ cat /etc/profile.d/openssl.sh
pathmunge /usr/local/openssl/bin
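Note that pathmunge is a shell function defined in /etc/profile on CentOS; if your distribution does not provide it, a plain PATH export in the same file achieves the same result:

export PATH=/usr/local/openssl/bin:$PATH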

Save the file and reload your shell, for instance by logging out and in again. Then you can check the version of your OpenSSL client. If you have errors loading shared libraries, continue reading.

Link libraries

In order to fix the problem with loading shared libraries we need to create an entry in ldconfig.

Create the following file:

$ vi /etc/ld.so.conf.d/openssl-1.0.2o.conf

And paste the following contents:

$ cat /etc/ld.so.conf.d/openssl-1.0.2o.conf
/usr/local/openssl/lib

We have simply told the dynamic linker to include the new libraries. After creating the file, you need to reload the linker cache using the following command:

$ ldconfig -v

Check the version of your OpenSSL now. It should print out
OpenSSL 1.0.2o  27 Mar 2018
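If you still see the old version, or want to double-check which libraries the new binary picks up, ldd should now resolve libssl and libcrypto from /usr/local/openssl/lib:

$ ldd /usr/local/openssl/bin/openssl | grep -E 'libssl|libcrypto'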

Curl

In order to have full HTTPS functionality, in most cases we need curl to access HTTP data. The default curl version in CentOS 5 is of course compiled against an outdated version of OpenSSL, so we need to compile a new version that supports the latest OpenSSL version we just built.


$ cd /usr/src/
$ curl -O -L https://curl.haxx.se/download/curl-7.59.0.tar.gz
$ tar zxvf curl-7.59.0.tar.gz
$ cd curl-7.59.0
$ ./configure --prefix=/usr/local/curl --with-ssl=/usr/local/openssl --enable-http --enable-ftp LDFLAGS=-L/usr/local/openssl/lib CPPFLAGS=-I/usr/local/openssl/include
$ make

$ make install
$ libtool --finish /usr/local/curl --with-ssl=/usr/local/openssl/lib

and that completes the curl compile process, allowing you to browse sites that require the latest TLS 1.2 encryption.
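To verify that the freshly built curl really uses the new OpenSSL, check its version banner and try a TLS 1.2-only site, for example:

$ /usr/local/curl/bin/curl --version
$ /usr/local/curl/bin/curl -I https://github.com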


Mount iSCSI target to your Virtualbox VMs

I spent some time on this, mounting an iSCSI LUN from a NetApp volume.
In your NetApp SVM, enter the new initiator with a default name following the IQN naming convention:

iqn.2018-03.freebsd.com:freebsd





After creating the NetApp SVM, LUN, LIF, etc., you can mount the new iSCSI volume with the following command.


PS C:\Program Files\Oracle\VirtualBox>
.\VBoxManage.exe storageattach freebsd --storagectl "SATA" --port 0 --type hdd --medium iscsi --server 192.168.0.162 --target "iqn.1992-08.com.netapp:sn.944d91542e4811e8b5b800505600c301:vs.4" --tport 3260 --initiator "iqn.2018-03.freebsd.com:freebsd"
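To confirm the disk is attached, the VM's storage configuration can be listed with showvminfo and checked for the iSCSI medium on the SATA controller:

PS C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe showvminfo freebsd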


Here is a screenshot from the Virtual Media Manager (Ctrl-D) in VirtualBox:

Now you can go on and install your preferred OS.


Set up a Powershell dev environment

For us sysadmins it is important to create scripts on the fly and make things work. By working in an ad-hoc way, though, our scripts get lost, are not curated, and especially are not searchable.

Initial tools setup

To better organize our code, some personal best practices follow:
Environment: Windows
Download and install TortoiseGit and, of course, Git. While installing Git, just select the defaults.

Saving code

For every single script it is a good idea to keep all of its related files in a separate folder. Then put all these folders in a single one named scripts.

After installing TortoiseGit it is a good idea to go through the First Start wizard.
Select the defaults, and in the “configure user information” screen enter your name and a valid email address.

The next thing to do is to turn your scripts folder into a Git repo. With the help of TortoiseGit this can be easily achieved with just a right click of the mouse.

And you are done with the repository creation.

After each working session it is a good idea to commit your work. With TortoiseGit, again, this can easily be done with a mouse right-click.

Then fill out the commit screen with an appropriate message and click Commit.

The commit confirmation screen will show up.

Do the same steps (commit) after each coding session to save your work and your coding history.

Pushing code remotely

After working locally, you might consider storing your code in a remote repo. A good option is GitLab, which offers private repos for free.
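If you prefer the plain git command line over TortoiseGit, the same local workflow plus the push to GitLab looks roughly like this (the remote URL is a placeholder for your own repo):

$ cd scripts
$ git init
$ git add .
$ git commit -m "initial commit of my scripts"
$ git remote add origin https://gitlab.com/<your-user>/scripts.git
$ git push -u origin master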


How to Install ReportServer on CentOS 7

ReportServer is a free and open source business intelligence (OSBI) platform with powerful reporting and analysis tools. It gathers data from multiple business touch points and generates different reports from the data. It provides a responsive and unified interface to display the data to the user. It provides powerful ad hoc reporting capabilities and integrates Jasper and Eclipse BIRT in one unified environment.
In this tutorial, we will install ReportServer on CentOS 7 server.
Prerequisite
  • Minimal CentOS 7 server
  • Root privileges

Install ReportServer

Before installing any package it is recommended that you update the packages and repository using the following command.
yum -y update

Install JAVA

Once your system is updated, we will install the latest version of Oracle Java into the server. Run the following command to download the RPM package.
wget --no-cookies --no-check-certificate --header "Cookie:oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.rpm"
If you do not have wget installed, you can run the yum -y install wget to install wget. Now install the downloaded RPM using the following command.
yum -y localinstall jdk-8u131-linux-x64.rpm
You can now check the Java version using the following command.
java -version
You will get the following output.
[root@liptan-pc ~]# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
You will also need to check if the JAVA_HOME environment variable is set. Run the following command to do so.
echo $JAVA_HOME
If you get a null or blank output, you will need to manually set the JAVA_HOME variable. Edit the .bash_profile file using your favourite editor. In this tutorial, we will use nano editor. Run the following command to edit .bash_profile using nano.
nano ~/.bash_profile
Now add the following lines at the end of the file.
export JAVA_HOME=/usr/java/jdk1.8.0_131/
export JRE_HOME=/usr/java/jdk1.8.0_131/jre
Now source the file using the following command.
source ~/.bash_profile
Now you can run the echo $JAVA_HOME command again to check if the environment variable is set or not.
[root@liptan-pc ~]# echo $JAVA_HOME 
/usr/java/jdk1.8.0_131/

Install Tomcat Server

Once Java is installed, you will need to install the Tomcat server. Tomcat is an application server for Java applications. Run the following commands to create the tomcat user and group.
groupadd tomcat
The above command will create a group named tomcat.
useradd -M -s /bin/nologin -g tomcat -d /opt/tomcat tomcat
The above command will create a user tomcat having no login shell and home directory as /opt/tomcat.
Now download the Tomcat archive from Tomcat download page using the following command.
cd ~
wget http://www-us.apache.org/dist/tomcat/tomcat-8/v8.5.15/bin/apache-tomcat-8.5.15.tar.gz
Now we will install the tomcat server in /opt/tomcat directory. Create a new directory and extract the archive using the following command.
mkdir /opt/tomcat
tar xvf apache-tomcat-8*tar.gz -C /opt/tomcat --strip-components=1
Now provide the ownership of the files to tomcat user and group using the following command.
chown -R tomcat:tomcat /opt/tomcat

Install PostgreSQL

Now that we have Tomcat set up, you can proceed to install PostgreSQL database server. Run the following command to install PostgreSQL.
yum -y install postgresql-server postgresql-contrib
Now initialize the database using the following command.
postgresql-setup initdb
Start and enable PostgreSQL database service using the following command.
systemctl start postgresql
systemctl enable postgresql
Now run the following commands to change the password of the PostgreSQL superuser, called postgres.
sudo -u postgres psql postgres
\password postgres
Enter \q or press Ctrl+D to exit the Postgres shell.
Now run the following command to create a new database for ReportServer, called reportserver.
sudo -u postgres createdb reportserver
Now run the following command to create a new user for ReportServer database.
sudo -u postgres createuser -P -s -e reportserver
You will need to enter the password twice. You should get the following output.
[root@liptan-pc ~]# sudo -u postgres  createuser -P -s -e reportserver
Enter password for new role:
Enter it again:
CREATE ROLE reportserver PASSWORD 'md5171d269772c6fa27e2d02d9e13f0538b' SUPERUSER CREATEDB CREATEROLE INHERIT LOGIN;
Now assign the database user to the database using the following command.
sudo -u postgres psql
GRANT ALL PRIVILEGES ON DATABASE reportserver TO reportserver;
Exit the shell using \q.
Now you will need to edit a PostgreSQL configuration file so that the database can be connected without the postgres user. Edit the pg_hba.conf using any editor.
nano /var/lib/pgsql/data/pg_hba.conf
Find the following lines and change peer to trust and ident to md5.
# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
local   all             all                                     peer
# IPv4 local connections:
host    all             all             127.0.0.1/32            ident
# IPv6 local connections:
host    all             all             ::1/128                 ident

Once updated, the configuration should look like shown below.
# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
local   all             all                                     trust
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
# IPv6 local connections:
host    all             all             ::1/128                 md5
Now restart PostgreSQL server using the following command.
systemctl restart postgresql

Install ReportServer

Now that we have both Tomcat and PostgreSQL set up, we can download and set up ReportServer. Run the following command to download ReportServer.
wget https://downloads.sourceforge.net/project/dw-rs/bin/3.0/RS3.0.2-5855-2016-05-29-17-55-24-reportserver-ce.zip -O reportserver.zip
You can always find the link to the latest version on the ReportServer download page.
Now remove everything in the web ROOT folder of Tomcat installation using the following command.
rm -rf /opt/tomcat/webapps/ROOT/*
Now extract the ReportServer archive using the following command.
unzip reportserver.zip -d /opt/tomcat/webapps/ROOT/
Now copy the configuration file from the example files using the following command.
cp /opt/tomcat/webapps/ROOT/WEB-INF/classes/persistence.properties.example /opt/tomcat/webapps/ROOT/WEB-INF/classes/persistence.properties
Now open the persistence.properties file and provide the database information which we have created earlier.
nano /opt/tomcat/webapps/ROOT/WEB-INF/classes/persistence.properties
Now add the following lines at the end of the file.
hibernate.connection.username=reportserver
hibernate.connection.password=StrongPassword
hibernate.dialect=net.datenwerke.rs.utils.hibernate.PostgreSQLDialect
hibernate.connection.driver_class=org.postgresql.Driver
hibernate.connection.url=jdbc:postgresql://localhost/reportserver
Change the username, password, and database name according to the database setup you created.
Now provide the necessary ownership using the following command.
chown -R tomcat:tomcat /opt/tomcat/webapps/ROOT/
Now initialize the ReportServer database using the following command.
psql -U reportserver -d reportserver -a -f /opt/tomcat/webapps/ROOT/ddl/reportserver-RS3.0.2-5855-schema-PostgreSQL_CREATE.sql
It will ask you for the password of your database user; provide the password and it will run the DDL script to initialize the database.
Finally, you will need to create a Systemd script to run tomcat server.
Create a new Systemd file using the following command.
nano /etc/systemd/system/tomcat.service
Copy and paste the following content into the file.
[Unit]
Description=Apache Tomcat Web Application Container
After=syslog.target network.target

[Service]
Type=forking

Environment=JRE_HOME=/usr/java/jdk1.8.0_131/jre
Environment=CATALINA_HOME=/opt/tomcat
Environment=CATALINA_BASE=/opt/tomcat
Environment='JAVA_OPTS="-Djava.awt.headless=true -Xmx2g  -XX:+UseConcMarkSweepGC -Dfile.encoding=UTF8 -Drs.configdir=/opt/reportserver"'

ExecStart=/opt/tomcat/bin/startup.sh
ExecStop=/opt/tomcat/bin/shutdown.sh

User=tomcat
Group=tomcat
UMask=0007
RestartSec=10
Restart=always

[Install]
WantedBy=multi-user.target
Now you can start the application using the following command.
systemctl start tomcat
To enable Tomcat service to automatically start at boot time, run the following command.
systemctl enable tomcat
To check if the service is running, run the following command.
systemctl status tomcat
If the service is running, you should get the following output.
[root@liptan-pc reportserver]# systemctl status tomcat
● tomcat.service - Apache Tomcat Web Application Container
   Loaded: loaded (/etc/systemd/system/tomcat.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2017-06-07 15:00:32 UTC; 4min 41s ago
 Main PID: 13179 (java)
   CGroup: /system.slice/tomcat.service
           └─13179 /usr/java/jdk1.8.0_131/jre/bin/java -Djava.util.logging.config.file=/opt/tomcat/conf/logging.propert...

Jun 07 15:00:32 liptan-pc systemd[1]: Starting Apache Tomcat Web Application Container...
Jun 07 15:00:32 liptan-pc systemd[1]: Started Apache Tomcat Web Application Container.
You can now access your application on the following URL.
http://your-server-ip:8080
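On a default CentOS 7 install, firewalld may block port 8080. If the page does not load, opening the port is a likely fix (adjust the zone to your setup):

firewall-cmd --zone=public --permanent --add-port=8080/tcp
firewall-cmd --reload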
You will see the following login interface.
ReportServer Login
You can now log in to your website using the username root and password root. Once you are logged in, you will see your default dashboard.
ReportServer Dashboard
On the dashboard, you can add the tools and widgets according to your choice. You can access TeamSpace by clicking on TeamSpace link from the top bar.
TeamSpace
You can configure scheduled reporting from the Scheduler menu. You can access Scheduler by clicking the Scheduler link in the top bar.
Report Scheduler
To change the password and access the administration dashboard, click on the Administration link in the top menu.
Change password in ReportServer

Conclusion

In this tutorial, we learned how to install ReportServer on CentOS 7. You can now use the application to analyse and generate different reports for your firm.

Get your public IP address from Ripe NCC

If you are using PowerShell and you need your public IP address in the console or in scripts, then RIPE NCC can really help you get it.

You can query the RIPEstat servers, receive your IP address as JSON, and save it in a variable to use later.

To get the JSON data just execute this:

C:\Windows\system32> $a = (Invoke-WebRequest -Uri "https://stat.ripe.net/data/whats-my-ip/data.json" | ConvertFrom-Json)


and you should have this output:

PS C:\Windows\system32> $a

status           : ok
server_id        : stat-app15
status_code      : 200
version          : 0.1
cached           : False
see_also         : {}
time             : 2017-06-18T21:49:27.221689
messages         : {}
data_call_status : supported
process_time     : 24
build_version    : 2017.6.15.213
query_id         : 000c524e-5470-11e7-8856-00505688b546
data             : @{ip=123.123.123.123}


If you just need the IP address then filter out only the IP address object like this:
$ip_address = (Invoke-WebRequest -Uri "https://stat.ripe.net/data/whats-my-ip/data.json" | ConvertFrom-Json).data.ip


output:
PS C:\Windows\system32> $ip_address
123.123.123.123






Build open source clouds with 4 OpenStack guides and tutorials

Every time you turn around, it seems like there’s a new open source project which might be of value to a cloud administrator. A huge number of these projects fall under the umbrella of OpenStack, the open source cloud toolkit.
Fortunately, there are plenty of tools out there to help with growing your OpenStack knowledge base, from meetups and in-person training, to mailing lists and IRC channels, to books, websites, and the official documentation.
Adding to that list are many individual members of the OpenStack community who are sharing their own tutorials, guides, and other helpful information across their own blogs and community sites. In order to help you keep up with these, every month Opensource.com takes a look at the latest community-created educational content for OpenStackers and brings it to you here.


  • One of the more interesting aspects of OpenStack is that it really is a composable toolkit of different projects which are designed to be used in conjunction with one another but which can provide value to other projects outside of OpenStack itself. One great example of that is OpenStack’s storage projects, which can be used independently of OpenStack or swapped out within an OpenStack cloud. Recently, John Griffith provided a great tutorial on how OpenStack’s Cinder block storage project can be used with Docker and Linux container systems.
  • One of the challenges that comes up in having so many different interchangeable parts, particularly with storage components, is knowing how to choose the right one for your needs and the needs of your cloud’s users. Learn all about the various factors that are important to consider in this guide to selecting a storage backend for OpenStack.
  • Mistral provides a workflow service within OpenStack, which the TripleO project recently adopted in the most recent release cycle. Like any cloud project, the team encountered a few unexpected hiccups along the way, and documented them in this look at debugging Mistral in TripleO.
  • One challenge of a large project like OpenStack with a diversity of contributors, often working on pseudo-independent projects, is that the code base can reflect a variety of different coding styles and bring ambiguities related to uncertainties in the code. Various automated tools can help to rein this in; one such tool, Eslint, is specifically oriented towards JavaScript code. Learn how to implement Eslint for your OpenStack project’s JavaScript-based sections.

Thanks for checking out our website.


An introduction to OpenStack clouds for beginners

What is OpenStack? Who might use it?

OpenStack is an open source cloud operating system written in Python
to manage pools of compute, storage, and networking resources via
command-line interface (CLI) or a web-based dashboard. It is designed to
run on commodity hardware and is sometimes referred to as Infrastructure
as a Service (IaaS). OpenStack runs on common Linux platforms such as
RHEL, SUSE, or Ubuntu.

OpenStack is an infrastructure (or in simpler terms, a cloud). It can
create an environment that provides on-demand increase or decrease of
resource allocation, and the resources are not limited to a single
location. Big data, web services, and Network Function Virtualization
(NFV) for service providers are all good applications for OpenStack.

What are the key services and components of OpenStack? What do they do?

OpenStack follows a bi-annual release cycle, with each release
identified by a name instead of number, so the first release was Austin,
the current release is Mitaka, and the previous releases were Liberty
and Kilo, respectively. Since the Kilo release, OpenStack has started to
shift from the incubation/integrated model to the Big Tent model, where projects are tagged with specific attributes.

The major components of a cloud infrastructure are compute, storage,
and networking. These used to be called the core services of OpenStack,
while all others were called the shared services.

Compute:

  • Nova: Provides virtual machines (VMs) on demand.

Storage:

  • Swift: Provides a scalable storage system that supports object storage.
  • Cinder: Provides persistent block storage to guest VMs.

Networking:

  • Neutron: Provides network connectivity as a service between interface devices managed by OpenStack services.

Shared services:

  • Keystone: Provides authentication and authorization for all the OpenStack services.
  • Glance: Provides a catalog and repository for virtual disk images.
  • Horizon: Provides a modular, web-based user interface for OpenStack services.
  • Ceilometer: Provides a single point of contact for billing systems.
  • Heat: Provides orchestration services for multiple composite cloud applications.
  • Trove: Provides database-as-a-service (DBaaS) provisioning for relational and non-relational database engines.
  • Sahara: Provides a service to provision data intensive application clusters.
  • Magnum: Offers container orchestration engines for deploying and managing containers.

I have listed only the most common projects. New projects are added in each release.

Since switching to the Big Tent approach, more and more projects are
now considered a part of OpenStack. There is a committee working on OpenStack DefCore, a minimum required feature set which products must comply with in order to use the OpenStack name.

Why use OpenStack and not just a traditional virtualization tool? What value does it provide over a hypervisor?

Virtualization tools abstract the resource from the physical hardware and allow for automation.

OpenStack pushes this one step further by providing an elastic,
self-service, and measurable infrastructure for managing a pool of
compute, storage, and networking resources. The resources that OpenStack
manages can be either physical or virtual.

How can OpenStack work with containers? Why might an enterprise wish to do this?

Project Magnum
uses OpenStack as an infrastructure to deploy Docker containers. Before
project Magnum, Docker container was listed as a hypervisor type in
Nova (a compute service of OpenStack).

In project Magnum, there is a concept of pods, bays, and services,
which work together as if they were a single application to which an
access policy can be applied.

The container orchestration engine (COE) allows for the deployment of
multiple Docker containers as a unit. At this time, the supported COEs
in Magnum are:

One of the popular container applications in the enterprise space is
microservices, wherein a big, monolithic application is divided into
“micro-services” implemented in the form of containers. This new trend
in application deployment provides agility, scalability, and high
availability.

The Liberty release introduced project Kuryr, which is built on top of Neutron and addresses networking issues specific to containers in an OpenStack infrastructure.

What does a typical OpenStack deployment look like?

I don’t think there’s such a thing as a typical OpenStack deployment,
and that’s the beauty of it. While it is not a one-size-fits-all
product, OpenStack offers a very flexible and rich infrastructure. What
it can offer is limited only by what the architect can come up with.
OpenStack is just like a LEGO set; we can pick and choose to fit a
particular deployment requirement. Not only are the resources in
OpenStack elastic, but the feature set is also elastic in a sense that
we can add and delete feature sets.