Resources, Articles, Tricks, and Solutions about Server Management Service

cPanel reported error number 1 when it ended - Fix it Now ?

This article covers methods to resolve "cPanel reported error number 1 when it ended". This error causes a cPanel copy of an account from another server to fail.

The error means that the destination server isn't able to log in to the source server to fetch the files.


To fix this error, do the following:

1) Recheck the username, password, and IP address you are using.

2) Firewall: the source server's firewall may be blocking your destination server's IP.

You would need to log in to the WHM of the source server and go to "ConfigServer Security & Firewall".

Retry your transfer and it should work.
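If ConfigServer Security & Firewall is what's blocking the transfer, the destination server's IP can also be whitelisted from the source server's shell; a sketch (203.0.113.10 is a placeholder for your destination server's IP):

```shell
# On the SOURCE server: allow the destination server's IP through CSF
csf -a 203.0.113.10 "cPanel transfer destination"
csf -r    # reload the firewall so the allow entry takes effect
```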

Import Nagios Log Server Into Microsoft Hyper-V - How to Do it?

This article covers how to import Nagios Log Server into Microsoft Hyper-V by following the series of steps outlined in this guide.

cPanel Error Iproute Conflicts With Kernel - Fix it Now ?

This article covers how to fix the cPanel error "Iproute Conflicts With Kernel". Basically, this error happens when there is an outdated kernel on the server.


Instead of deleting conflicting kernels, you can also add the iproute package to the excludes of yum in /etc/yum.conf file, then the iproute package won't be marked for the update.

It can be useful when you need to perform an update but cannot reboot the server at that moment.

It can be excluded manually using a preferred text editor or using the following command:

$ sed -i 's/exclude=/exclude=iproute /' /etc/yum.conf

The change can be reverted using this command:

$ sed -i 's/exclude=iproute /exclude=/' /etc/yum.conf
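Note that both sed commands assume an exclude= line already exists in /etc/yum.conf. A safe way to rehearse the edit is on a scratch copy first; a minimal sketch:

```shell
# Rehearse the exclude edit on a throwaway file before touching /etc/yum.conf
scratch=$(mktemp)
printf 'exclude=\n' > "$scratch"                 # minimal stand-in for yum.conf
sed -i 's/exclude=/exclude=iproute /' "$scratch"
grep '^exclude=' "$scratch"                      # the line should now start with "exclude=iproute"
```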

Configure SSL / TLS in Nagios Log Server - How to do it ?

This article covers how to configure SSL/TLS in Nagios Log Server. SSL/TLS provides security between the end user's web browser and Nagios Log Server by encrypting the traffic. This guide is intended for use by Nagios Log Server administrators who require encrypted connections to their Nagios Log Server.

cPanel error networkmanager is installed and running

This article covers how to resolve the error, NetworkManager is installed and running. Basically, this error happens if cPanel does not support NetworkManager-enabled systems.


To fix this error, simply run the commands below and then restart the installation of cPanel:

$ systemctl stop NetworkManager.service
$ systemctl disable NetworkManager.service

Managing Instances In Nagios Log Server

This article covers Instances in Nagios Log Server and how we can manage them. 

Nagios Log Server is a clustered application; it consists of one or more instances of Nagios Log Server. An instance is an installation of Nagios Log Server that participates in the cluster and acts as a location for the received log data to reside. The log data is spread across the instances using the Elasticsearch database, a special database used by Nagios Log Server.

Install Wazuh Server on CentOS 7 - Step by Step Process ?

This article covers the installation procedure of Wazuh Server on CentOS Linux System. Basically, Wazuh is a free, open-source and enterprise-ready security monitoring solution for threat detection, integrity monitoring, incident response and compliance. 


You can use Wazuh for the following applications:

  • Security analysis
  • Log analysis
  • Vulnerability detection
  • Container security
  • Cloud security


To Install Java on CentOS 8.

1. Run the command below to install JDK:

$ sudo dnf install java-11-openjdk-devel

2. Confirm that you have it installed

$ java -version

Alerting On Log Events With Nagios Log Server

This article covers Alerting On Log Events With Nagios Log Server. Basically, for alerting on Log Events with Nagios Log Server one needs to be familiar with the options available.

With this guide, you will learn how to create various alerts in Nagios Log Server, such as sending them to a Nagios XI or Nagios Core monitoring server using Nagios Remote Data Processor (NRDP), sending an email, sending SNMP traps, and executing scripts.

Analyzing Logs With Nagios Log Server

This article covers how to analyze logs with Nagios Log Server. Basically, in order to analyze logs with Nagios Log Server, one needs to be familiar with the options in the Dashboards menu. This guide is essential for Nagios Log Server administrators and users looking for information on querying, filtering, and drilling down into the data in Nagios Log Server.

You can audit your IT infrastructure, maintain historical records of usage of IT infrastructure, create reports, and analyze logs using the Nagios Log Server.

Access denied to VNC Server - How to fix this error ?

This article covers methods to fix the error, Access denied to VNC Server. Basically, this error occurs while trying to connect to a VNC server using a cloud connection. This message means that your RealVNC account has been signed out of VNC Viewer.

This will happen if you have recently changed the password for your RealVNC account, for example.


To resolve this VNC connection issue, click Sign in again and enter your RealVNC account credentials.

Once you see a green tick/check mark in the top right next to your name, try connecting to the VNC Server again.

SNMP Trap Hardening in Nagios - How it Works ?

This article covers how to go about SNMP Trap Hardening in Nagios.


When using the vi editor:

1. To make changes press i on the keyboard first to enter insert mode

2. Press Esc to exit insert mode

3. When you have finished, save the changes in vi by typing :wq and press Enter

 

How to Send Test Trap ?

When working through this documentation you may want to test the changes by sending a test trap. The following KB article provides examples on how to send a test trap, which can be very helpful:


How To Send An SNMP Test Trap ?

When a test trap is received on the Nagios XI server it should be logged in the /var/log/snmptt/snmpttunknown.log file.

The default SNMP Trap configuration is stored in the /etc/snmp/snmptrapd.conf file and contains just two lines:

disableAuthorization yes
traphandle default /usr/sbin/snmptthandler

NRPE Command Plugin Not Defined - How to fix it ?

This article covers methods to resolve 'NRPE Command Plugin Not Defined' for our customers.

This error is quite straightforward. Usually it is caused by a mismatch between the command name declared in Nagios XI to be checked through NRPE and the actual command name of the command directive in the remote host's nrpe.cfg file.

This problem will occur in versions of check_nrpe before v3. 

What is happening here is that the initial -c check_users is being overwritten by the -a -w 5 -c 10, as check_nrpe thinks the -c 10 argument is the command argument, not one of the -a arguments.
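A minimal sketch of the matching pair, assuming the stock check_users plugin path: the command name Nagios XI references must be exactly the name inside the brackets in the remote host's nrpe.cfg:

```ini
; nrpe.cfg on the remote host -- "check_users" is the name Nagios XI must ask for
command[check_users]=/usr/local/nagios/libexec/check_users -w 5 -c 10
```

Embedding the thresholds in the directive itself, as above, also sidesteps the pre-v3 -a argument parsing problem described in this section.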

SNMP Trap v3 Configuration in Nagios - How to get it done ?

This article covers How to configure SNMP Trap v3 on the Nagios XI server.

The main difference between v2 and v3 traps is the authentication mechanisms. v2 is much simpler by design whereas v3 has multiple layers of authentication to strengthen it. Probably the biggest difference is that the SNMP Trap Daemon (snmptrapd) is configured by default to accept v2 traps from any device regardless of what SNMP community is provided. 

However, snmptrapd cannot be configured to accept v3 traps from any device; it must be explicitly configured before it can receive an SNMP v3 trap.


The default SNMP Trap configuration is stored in the /etc/snmp/snmptrapd.conf file and contains just two lines:

disableAuthorization yes
traphandle default /usr/sbin/snmptthandler

 

The disableAuthorization directive allows SNMP v2 traps from any device to be sent to Nagios XI. 

Even if this line exists the Nagios XI server will not be able to receive SNMP v3 traps unless the server has been specifically configured for SNMP v3 traps.

Install PHP 8 on CentOS 7 / CentOS 8 - Step by Step Process ?

This article covers how to install PHP 8.0 on CentOS 8/7 and RHEL 8/7.

PHP is the most used scripting language for web development, both websites and web applications.

This guide will show you how to install PHP 8.0 on CentOS 8 | CentOS 7. 

Please note the GA release is fit for running in Production if the application already supports it.


To install any additional PHP package use command syntax:

$ sudo yum install php-xxx

To Check PHP version:

$ php --version

Install PHP 8 on Ubuntu 20.04 or 18.04 - Step by Step Process ?

This article covers steps to install PHP 8 on Ubuntu. PHP is arguably one of the most widely used server-side programming languages. It's the language of choice when developing dynamic and responsive websites. Basically, popular CMS platforms such as WordPress, Drupal, and Magento are based on PHP.


To Install PHP as Apache Module

Run the commands:

$ sudo apt update
$ sudo apt install php8.0 libapache2-mod-php8.0

Once the packages are installed, restart Apache for the PHP module to get loaded:

$ sudo systemctl restart apache2


To Configure Apache with PHP-FPM

Php-FPM is a FastCGI process manager for PHP. 

1. Run the following command to install the necessary packages:

$ sudo apt update
$ sudo apt install php8.0-fpm libapache2-mod-fcgid

2. By default PHP-FPM is not enabled in Apache. 

To enable it, run:

$ sudo a2enmod proxy_fcgi setenvif
$ sudo a2enconf php8.0-fpm

3. To activate the changes, restart Apache:

$ sudo systemctl restart apache2


To install PHP 8.0 with Nginx

Nginx doesn't have built-in support for processing PHP files. We'll use PHP-FPM ("fastCGI process manager") to handle the PHP files.

Run the following commands to install PHP and PHP FPM packages:

$ sudo apt update
$ sudo apt install php8.0-fpm

Once the installation is completed, the FPM service will start automatically. 

To check the status of the service, run

$ systemctl status php8.0-fpm

Do not forget to restart the Nginx service so that the new configuration takes effect:

$ sudo systemctl restart nginx

Send test SNMP trap in Nagios - How does this work ?

This article covers how to send a trap to Nagios server to test SNMP Trap functionality.

Basically, when troubleshooting an SNMP Trap issue, it can be very helpful to remove the actual device that could be causing problems and use the snmptrap command instead.

So in this guide, you will learn all the methods of sending a trap to your Nagios server to test SNMP Trap functionality.
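As a concrete example, the standard Net-SNMP heartbeat trap can be sent to the local trap daemon like this (the community string "public" and the localhost target are assumptions; adjust them to your snmptrapd setup):

```shell
# Send the Net-SNMP example heartbeat notification (v2c) to the local snmptrapd
snmptrap -v 2c -c public localhost '' \
  NET-SNMP-EXAMPLES-MIB::netSnmpExampleHeartbeatNotification \
  netSnmpExampleHeartbeatRate i 123
```

If snmptt has no matching definition yet, the trap should appear in /var/log/snmptt/snmpttunknown.log.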


SNMP Trap Definition

The following trap definition can be placed in /etc/snmp/snmptt.conf which will allow the test traps sent above to be passed through to Nagios:

EVENT netSnmpExampleHeartbeatRate .1.3.6.1.4.1.8072.2.3.0.1 "netSnmpExampleHeartbeatRate" Normal
FORMAT SNMP netSnmpExampleHeartbeatRate
EXEC /usr/local/bin/snmptraphandling.py "$r" "SNMP Traps" "$s" "$@" "" "netSnmpExampleHeartbeatRate"


The default SNMP Trap configuration is stored in the /etc/snmp/snmptrapd.conf file and contains just two lines:

disableAuthorization yes
traphandle default /usr/sbin/snmptthandler

SMTP error no route to host - Fix it Now ?

This article covers methods to resolve SMTP error no route to host.

This error happens when the port is blocked at the hosting provider's end or by the ISP.

In this guide we have outlined different methods to fix this SMTP error.
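Since "no route to host" usually means the SMTP port is filtered, a quick connectivity probe helps confirm it; a minimal sketch using bash's /dev/tcp (the hostname and port are examples):

```shell
# Probe a TCP port; failure suggests the port is blocked or the host is unreachable
check_port() {
  timeout 5 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null \
    && echo "port $2 open on $1" \
    || echo "port $2 unreachable on $1"
}
check_port smtp.example.com 587
```

If the common submission ports (25, 465, 587) are all unreachable from the server, the block is at the firewall or ISP level rather than in your mail configuration.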

Install PHP 8 on Debian 10 / Debian 9 - Step by Step Process ?

This article covers how to install PHP 8 on Debian 10 and Debian 9.


To install Apache with PHP 8 module:

$ sudo apt install apache2 libapache2-mod-php8.0 

After successful installation, restart Apache service to reload newly installed modules:

$ sudo systemctl restart apache2 


To check loaded PHP modules use the command:

$ php -m

PHPMailer SMTP error password command failed - Fix it Now ?

This article covers methods to resolve PHPMailer SMTP error password command failed. 


To fix this SMTP issue, you need to:

1) Login to your Gmail account using the web browser.

2) Click on this link to enable applications to access your account: https://accounts.google.com/b/0/DisplayUnlockCaptcha

3) Click on the Continue button to complete the step.

4) Now try again to send the email from your PHP script. It should work.

Install Docker CE on AlmaLinux 8 - Step by Step Process ?

This article covers step by Step process to install Docker CE on AlmaLinux.

Docker is a tool that is used to run software in a container.

It's a great way for developers and users to worry less about compatibility with an operating system and dependencies because the contained software should run identically on any system.


To Install Docker on AlmaLinux:

1. We can add the Docker repository to our system with the following command.

$ sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

2. Before we begin installing Docker, we need to remove the podman and buildah packages from our system, as they conflict with Docker and will inhibit it from being installed.

$ sudo dnf remove podman buildah

3. Finally, we can install the three Docker packages we'll need by executing the following command.

$ sudo dnf install docker-ce docker-ce-cli containerd.io

4. Once installation is completed, start the Docker service and, optionally, enable it to run whenever the system is rebooted:

$ sudo systemctl start docker.service
$ sudo systemctl enable docker.service

5. You can verify that Docker is installed and gather some information about the current version by entering this command:

$ sudo docker version

DirectAdmin PhpMyAdmin error 500 - Fix it Now ?

This article covers methods to resolve DirectAdmin phpMyAdmin error 500. This error happens as a result of a number of reasons, including PHP settings, ModSecurity rules, and so on.


To resolve this error, you need to modify the file /usr/share/phpmyadmin/libraries/sql.lib.php, changing:

&& ($analyzed_sql_results['select_expr'][0] == '*')))

to:

&& ($analyzed_sql_results['select_expr'][0] == '*'))

(that is, removing the extra closing parenthesis).
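The same edit can be scripted with sed, keeping a backup; a sketch (try it on a copy of the file before touching the live one):

```shell
# Remove the extra closing parenthesis in sql.lib.php, keeping a .bak copy
fix_sql_lib() {
  cp "$1" "$1.bak"                  # backup before editing
  sed -i "s/'\*')))/'*'))/" "$1"    # the ')))' after the '*' literal becomes '))'
}
# fix_sql_lib /usr/share/phpmyadmin/libraries/sql.lib.php   # run as root
```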

Drupal SMTP error "could not connect to smtp host" - Fix it Now ?

This article covers methods to resolve Drupal SMTP error "could not connect to smtp host".

Basically, this error happens as a result of improper Drupal SMTP settings such as wrong SMTP server name, wrong port settings, and so on. 


To resolve this SMTP error, follow the steps given below:

1. Login into myaccount.google.com.

2. Click on the link connected apps & sites.

3. Set "Allow less secure apps" to "ON" (near the bottom of the page).


Also, you can try the following to fix this SMTP error,

1. System access configuration

You need to allow access through the firewall or network to send mail on Linux, Windows, and Mac.

The following commands set the required permissions on Linux:

i.  iptables -I OUTPUT -p tcp --dport 465 -j ACCEPT

ii.  iptables -I OUTPUT -p tcp --dport 587 -j ACCEPT


2. SMTP Authentication Support

Set your Gmail or Google Apps account information.

If you want to use Gmail as the SMTP server:

SMTP server: smtp.gmail.com

SMTP port: 465

Use encrypted protocol: Use SSL

SMTP Authentication

Username: youremail@gmail.com or youremail@yourGoogleAppsDomainName.com (Google Apps)

Password: yourpassword

Note: Remove any leading and trailing spaces from the "SMTP Authentication Username"; otherwise Gmail will not authenticate your request.

Enable EPEL repository on AlmaLinux 8 - Step by Step Process ?

This article covers how to enable EPEL repository on AlmaLinux 8.

Extra Packages for Enterprise Linux (EPEL) is a repository with a high-quality set of additional packages for Enterprise Linux operating systems such as Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL), Oracle Linux (OL), AlmaLinux, and any other Linux distribution from the RHEL family.


Run the command below to install EPEL Repository on AlmaLinux OS 8:

# sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm

Accept the installation by pressing the y key.

Install WHM Cpanel on AlmaLinux 8 Server - Step by Step Process ?

This article covers the complete steps to install WHM cPanel on an AlmaLinux 8 server.

Most hosting services use WHM's cPanel to manage Linux servers for hosting purposes.

Although there are many hosting manager software packages, its easy-to-use interface and features make it one of the best control panels for web hosting services.


cPanel is a control panel developed in 1997 that offers high-quality web hosting with excellent features which, thanks to WHM (Web Host Manager), can be managed from a graphical console, with each component doing its job.

iisnode encountered an error when processing the request - Fix it Now ?

This article covers methods to resolve "iisnode encountered an error when processing the request" error.

Basically, this iisnode error is triggered when the application pool doesn't have enough permissions to write to the current folder.

To fix it, you can either:

1. Edit the permissions to give the IIS_IUSRS group write access: right-click your application's root directory, open 'Advanced Security', and grant 'Full Control' to the 'IIS_IUSRS' user, or

2. Go into the application pool's advanced settings menu and, under Process Model -> Identity, change the user account to one that already has write permissions.

Install CloudPanel Control Panel on Debian 10 - Step by Step Process ?

This article covers how to Install CloudPanel Control Panel on Debian 10. With CloudPanel, you can manage MySQL, NGINX, PHP-FPM, Redis, domains, FTP, user management, and many more from the web-based interface.

It supports all major cloud providers, including AWS, Google, and Digital Ocean, and is specially designed for high performance with minimal resource usage.

It also offers a CLI tool that helps you to perform several operations including, database backup, password reset, permissions, and more.


To Install CloudPanel on Debian Linux:

1. You can download it with the following command:

# curl -sSL https://installer.cloudpanel.io/ce/v1/install.sh -o cloudpanel_installer.sh

2. Once the script is downloaded, set proper permission to the downloaded script with the following command:

# chmod +x cloudpanel_installer.sh

3. Next, run the script using the following command:

# ./cloudpanel_installer.sh


Main Features of CloudPanel as listed in the official project website are:

1. It is open source and free to use

2. It provides a powerful intuitive interface for management

3. It is secure – provision of free SSL/TLS certificates

4. Designed for high Performance with minimal resource usage

5. It supports all major clouds – AWS, Digital Ocean, GCP, etc.

6. CloudPanel is available in more than ten languages, making it easy to install in any region

Graphs not recording for ICMP and ping checks - Fix it Now ?

This article covers method to fix Nagios error, Graphs not recording for ICMP and ping checks.

This problem happens after upgrading to Nagios XI 2014.


Solution to Nagios XI ICMP and Ping Checks Stopped Graphing:

1. First, a Perl package needs to be installed using the command below:

$ apt-get install -y librrd-simple-perl

2. Download and unzip the required files:

$ cd /tmp
$ wget https://assets.nagios.com/downloads/nagiosxi/scripts/rrd_ds_fix.zip
$ unzip rrd_ds_fix.zip

3. To run the script with RRD backups:

$ ./fix_ds_quantity.sh -d /usr/local/nagios/share/perfdata/

Install Ajenti Control Panel on Ubuntu 20.04 - Step by Step Process ?

This article covers step by step procedure to install Ajenti Control Panel on Ubuntu 20.04 for our customers.

Ajenti is a free-to-use and open-source server management and configuration panel written in Python, JavaScript, and AngularJS. It provides a web dashboard for administration as opposed to command-line management.

With this tool you can manage websites, DNS, Cron, Firewall, Files, Logs, Mail hosting services and so on.


The Ajenti Project consists of Ajenti Core and a set of stock plugins forming the Ajenti Panel.

1. Ajenti Core: a web interface development framework which includes a web server, an IoC container, a simplistic web framework, and a set of core components aiding client-server communication.

2. Ajenti Panel: consists of plugins developed for Ajenti Core and a startup script, together providing a server administration panel experience.


To Install Ajenti Control Panel on Ubuntu 20.04:

1. Update and upgrade your Ubuntu machine.

$ sudo apt update
$ sudo apt dist-upgrade

2. Once the upgrade is completed, reboot the system before initiating the installation of Ajenti on Ubuntu 20.04.

$ sudo systemctl reboot

3. A script is provided for the installation of the Ajenti control panel on Ubuntu 20.04. First, download the script with curl.

$ curl -O https://raw.githubusercontent.com/ajenti/ajenti/master/scripts/install.sh

4. Run the installer script with the sudo command.

$ sudo bash ./install.sh

PrestaShop back office error 500 or blank page - Fix it Now ?

This article covers methods to resolve PrestaShop back office error 500.

The error happens when the back office is accessed with either Debug mode or production mode activated; the characteristic of this error is that it occurs in only one of the modes, not in both.

Also, this error occurs ONLY IN PRESTASHOP STORES VERSION 1.7, and it makes it impossible to enter the back office, showing an error 500 or a blank page.

That's why we call it a critical error: it leaves the store inoperative.


To fix an HTTP 500 error on a PrestaShop online store website:

You can activate your web host's error reports in your PrestaShop shop via FTP or cPanel.

1.  From PrestaShop v1.4 to v1.5.2

i. Open config/config.inc.php

ii. On line 29, you will find this line: @ini_set('display_errors','off');

iii. Replace it with: @ini_set('display_errors','on');


2. PrestaShop v1.5.3 and later versions (including 1.6 and 1.7)

i. Open config/defines.inc.php

ii. On line 28, you will find this line: define('_PS_MODE_DEV_', false);

iii. Replace it with: define('_PS_MODE_DEV_', true);

Once error reporting is activated, you can browse your store's front or back office to find out what the problem is.
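For PrestaShop 1.5.3 and later, the defines.inc.php edit described above can also be scripted; a sketch (back up the file first):

```shell
# Flip _PS_MODE_DEV_ to true in config/defines.inc.php (PrestaShop 1.5.3+)
enable_ps_debug() {
  sed -i "s/define('_PS_MODE_DEV_', false);/define('_PS_MODE_DEV_', true);/" "$1"
}
# enable_ps_debug config/defines.inc.php
```

Remember to set the constant back to false once you have captured the error details, since debug output should not be shown to visitors.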


More about Server error 500:

Error 500 means Internal Server Error. Whenever a 500 error occurs, the server stops returning information to the web browser.

Therefore, as we mentioned above, this is a critical error that leaves the website inoperative.

The 500 errors, as we noted above, are internal server errors, and their origin may be a programming error in any item involved in the request for information returned by the server to the web browser.

Therefore, not all 500 errors come from the same source.

WordPress error "Your Connection Is Not Private" - Fix it Now ?

As a result of the SSL connection, sensitive information is protected from being stolen while being transferred between the server and the browser, which is one step in hardening your WordPress security.

"Your Connection Is Not Private Error" message means that Google Chrome is preventing you from visiting your site because it is untrusted.

Ultimately, the Chrome browser prevents you from gaining access to your website because the SSL certificates cannot be validated.

Typically, the "connection is not private" error in Google Chrome originates from issues on the client side, or from problems with the site's certificate.


To Fix Your Connection Is Not Private WordPress Error:

1. Reload the page

2. Check your network connection

3. Set time and date on your computer

4. Try browser's incognito mode

5. Clear your Browser Cookies, Cache, and History 

6. Disable Antivirus Temporarily

7. Update your Operating System

8. Restart your Computer

Rebooting your device will help clear out the temporary cache. 

This could very well fix your issue.

Prestashop parse error – How to fix the syntax error ?

This article covers ways to resolve Prestashop parse error.

Basically, the Prestashop parse error happens when we install Prestashop 1.7 or above, or when accessing the admin panel of the store.


To fix this error:

1. You can start with emptying your cache by deleting the /var/cache directory.

If that doesn't work, copy lines 28-30 from another shop and save it.

2. Also you can try to remove the following:

# php -- BEGIN cPanel-generated handler, do not edit
# Set the "ea-php72" package as the default "PHP" programming language.
<IfModule mime_module>
  AddHandler application/x-httpd-ea-php72 .php .php7 .phtml
</IfModule>
# php -- END cPanel-generated handler, do not edit

from the end of .htaccess, just to be sure that it is not the source of the issue.

Cloudflare 502 error - Fix it Now ?

This article covers methods to resolve Cloudflare 502 error. Basically, the Cloudflare 502 error triggers when the origin web server responds with a standard HTTP 502 bad gateway or 504 gateway timeout error. 

This happens due to firewall restrictions and server resource issues.


Cause of 502 Bad Gateway Errors:

1. Domain name not resolvable

This problem may happen when the domain name is not pointing to the correct IP, or does not point to any IP. Also, DNS propagation can take time after changes to DNS settings; it may take 24 to 48 hours for changes to be reflected, depending on the TTL defined per record in the DNS.

2. Server down

The origin server is not reachable; this may be because the server is down for some reason, or there is no communication path to the given server.

3. Firewall blocks

A firewall interrupts the communication between the edge servers and the origin server. This may be caused by security plugins of your CMS.

As part of the DDoS protection and mitigation process, or due to some strict firewall rules, servers can be blocked from accessing the origin server.
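To see whether the origin server (rather than Cloudflare) is at fault, it can help to request the site directly from the origin; a sketch using curl's --resolve flag (the hostname and IP are placeholders):

```shell
# Bypass Cloudflare by pinning the hostname to the origin server's IP
curl -sv -o /dev/null --resolve www.example.com:443:203.0.113.10 https://www.example.com/
```

If this direct request also fails or times out, the 502 originates at the origin server, not at Cloudflare's edge.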

Nginx upstream timed out error - Fix it Now ?

This article covers methods to resolve Nginx upstream timed out error. Basically, this error happens as a result of server resource usage and software timeouts.

A possible issue here could be that PHP is using too much RAM and the PHP FPM process gets killed. 


Therefore, do the following to fix this nginx error:

1. Make sure that there is enough RAM on the server, to check that you could use the top, htop or free -m commands.

2. Make sure that the PHP memory limit is not too high compared to the actual available memory on the Droplet.

For example, if you have 1GB of RAM available, your PHP memory limit should not be more than 64MB; otherwise, only a few processes could consume all of your memory.

3. Optimize your website by installing a good caching plugin, that way you would reduce the overall resource usage on the server.

4. Delete any plugins that are not being used. Generally speaking, it is always recommended to try and keep the number of your plugins as low as possible.

5. Consider using a CDN like Cloudflare; that way it would offload some of the heavy lifting from your Droplet.
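If the upstream genuinely needs more time, the Nginx-side timeout can also be raised; a minimal sketch, assuming a typical PHP-FPM location block (the socket path and the timeout value are examples):

```nginx
# inside the server block that proxies PHP requests to PHP-FPM
location ~ \.php$ {
    fastcgi_pass unix:/run/php/php8.0-fpm.sock;  # example socket path
    fastcgi_read_timeout 300;                    # allow up to 300s before "upstream timed out"
}
```

Reload Nginx after the change so the new timeout takes effect.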

Nagios error Unable To Login Using Two Factor Authentication

This article covers how to resolve Two Factor Authentication error in Nagios. 


To Reset nagiosadmin account Password:

1. Open an SSH or direct console session to your Nagios XI host and execute the following command:

/usr/local/nagiosxi/scripts/reset_nagiosadmin_password.php --password=newpassword

Note: If you would like to use special characters in your password, you should escape them with "\".

For example, if you want to set your new password to be "$new password#", then you can run:

/usr/local/nagiosxi/scripts/reset_nagiosadmin_password.php --password=\$new\ password\#

WordPress error "Sorry this file type is not permitted for security reasons" - Fix it Now ?

This article covers methods to resolve WordPress error "Sorry this file type is not permitted for security reasons". Basically, "Sorry this file type is not permitted for security reasons" in WordPress occurs when we try to upload a document to the WordPress library.

As we explained above, WordPress default configuration limits the types of files that you can upload to your site for security reasons.


To Fix "Sorry, This File Type Is Not Permitted for Security Reasons" Error in WordPress, Try to Use the Free WP Extra File Types Plugin:

1. If you'd prefer not to edit your wp-config.php file and/or you want more control over exactly which file types can be uploaded to your site, you can use the free WP Extra File Types plugin at WordPress.org

2. Once you install and activate the plugin, go to Settings → Extra File Types in your WordPress dashboard.

3. There, you'll see a lengthy list of file types. Check the box next to the file type(s) that you want to be able to upload and then click Save Changes at the bottom.

4. If you don't see the file type that you'd like to upload on the list, you can also add your own custom file types at the bottom of the plugin's settings list.
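For reference, the wp-config.php route the plugin lets you avoid is typically the ALLOW_UNFILTERED_UPLOADS constant, which disables WordPress's file-type check entirely; use it cautiously, since it removes the safety restriction for every user who can upload:

```php
// in wp-config.php, above the "That's all, stop editing!" line
define('ALLOW_UNFILTERED_UPLOADS', true);
```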

SNMP MIB Upload Problems in Nagios – Fix it Now ?

This article covers Nagios SNMP MIB Upload Problems.

This issue happens while uploading SNMP MIB files, and it can be a result of insufficient permissions on the SNMP MIB files.


Execute the following commands to reset the permissions and ownership on the Nagios SNMP MIB files:

# chmod -R ug+rw /usr/share/snmp/mibs
# chown -R root:nagios /usr/share/snmp/mibs

After executing those commands you should be able to upload the MIB file that previously did not work.

SNMPTT Service generates Cannot find module errors in Nagios

This article covers methods to fix "Cannot find module" errors in Nagios.

Basically, SNMPTT Service generates "Cannot find module" errors in Nagios when a MIB file contains spaces in the filename. 

The MIB files are located in the /usr/share/snmp/mibs/ folder.

This was identified as an issue and resolved in Nagios XI 5.4.0. As of version 5.4.0, when you upload MIBs via the Manage MIBs page the filename will have any spaces replaced with an underscore.

Reset Upgrade Status In Nagios Web Interface - How to Perform it ?

This article covers how to Reset Upgrade Status In Nagios Web Interface.

When upgrading Nagios XI using the web interface the upgrade progress may stall with the message "Upgrade in progress". 

Sometimes you will need to clear this message manually due to unforeseen circumstances; this guide explains how to clear the message.


To Reset Upgrade Status in Nagios:

The following command will reset the upgrade status on Nagios XI (using the default nagiosxi database credentials):

For MySQL/MariaDB:

mysql -u'nagiosxi' -p'n@gweb' nagiosxi -e "update xi_commands set status_code = '2' where command = '1120';"

Host Still Visible After Deletion in Nagios - How to resolve ?

This article covers steps to resolve Host Still Visible After Deletion (Ghost Hosts). Basically, by following this guide, you can easily resolve the error, Host Still Visible After Deletion in Nagios.

It is possible that you have multiple instances of Nagios running, or you have so-called "ghost" hosts or services.

In order to check for multiple instances of nagios, run the following command from the command line:

$ ps -ef | head -1 && ps -ef | grep bin/nagios

Problems Using Nagios With Proxies - Fix it Now ?

This article covers how to resolve Problems Using Nagios With Proxies which arises if we do not configure proxies correctly while using Nagios.

Note that the Nagios XI code makes several internal HTTP calls to the local Nagios XI server to import configuration data, apply configuration changes, process AJAX requests, etc. 

These functions may not work properly when you deploy a proxy if it is not configured properly, which could result in a non-functional Nagios XI installation.

Migrate Performance Data in Nagios - Step by Step Process ?

This article covers how to perform Migrate Performance Data in Nagios.

Basically, to migrate, we have to convert the data to XML and import it into RRDs on the new machine.

Historical performance data used to generate graphs is stored in Round Robin Database (RRD) files.

RRD performance data files are architecture-dependent binaries, so a simple file transfer only works when both machines share the same architecture.

Nagios mysql_error out of range value for column - Fix it now ?

This article covers Nagios error, mysql_error out of range value for column which is evident in the /var/log/messages file on the Nagios XI server.

To resolve this issue you will need to define the SQL Mode in the MySQL / MariaDB my.cnf configuration file:

1. Locate the [mysqld] section and check to see if there is an sql_mode already defined:

[mysqld]
# Recommended in standard MySQL setup
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

2. If the sql_mode= line already exists, replace it with the following; if it does not exist, add this line:

[mysqld]
sql_mode=""
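The edit above can be sketched as a small shell routine. The demonstration below works on a scratch copy; point it at your real /etc/my.cnf (after backing that file up) in production:

```shell
# Work on a scratch copy of my.cnf for demonstration purposes:
cnf=$(mktemp)
printf '[mysqld]\nsql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES\n' > "$cnf"

# Replace an existing sql_mode line, or add one under [mysqld]:
if grep -q '^sql_mode=' "$cnf"; then
    sed -i 's/^sql_mode=.*/sql_mode=""/' "$cnf"
else
    sed -i '/^\[mysqld\]/a sql_mode=""' "$cnf"
fi

cat "$cnf"
```

After editing the real file, restart MySQL/MariaDB so the new sql_mode takes effect.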

Prestashop error "an error occurred while sending the message" - Fix it Now ?

This article covers methods to fix Prestashop error "an error occurred while sending the message".

This error typically happens when the theme in use is not compatible with the latest version of PrestaShop.

To resolve this error, you can try modifying the contact form file.

Add this before the submit button (the url input is hidden via CSS and acts as a spam trap):

<style>
input[name=url] {
display: none !important;
}
</style>
<input type="text" name="url" value=""/>
<input type="hidden" name="token" value="{$token}" />

Can't connect to mysql error 111 - Fix this error now ?

This article covers methods to fix the MySQL error 'Can't connect to mysql error 111' on Linux machines for our customers.
This can happen after a host IP change.

This issue can prevent connection to the database.
If you come across it, look in /etc/my.cnf, where there is a line:

bind-address = ip.add.ress

This may still be the server's old address, which blocks connections. Change it to your new address, restart MySQL/MariaDB, and you should be good again.
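The fix can be sketched with sed. The demonstration below uses a scratch copy and example addresses (192.0.2.10 for the stale IP, 198.51.100.20 for the new one); point it at the real /etc/my.cnf in production and restart MySQL/MariaDB afterwards:

```shell
# Scratch copy of my.cnf containing the stale bind-address:
cnf=$(mktemp)
printf '[mysqld]\nbind-address = 192.0.2.10\n' > "$cnf"

# Point bind-address at the server's current IP:
new_ip="198.51.100.20"
sed -i "s/^bind-address = .*/bind-address = $new_ip/" "$cnf"

grep '^bind-address' "$cnf"
```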

Plesk Webmail Server Not Found - Fix it Now ?

This article covers methods to resolve the Plesk error "Webmail Server Not Found", which can happen while opening webmail or a domain in a browser, or when issuing a Let's Encrypt certificate for the domain. The main reason for this error is that the webmail/domain does not resolve correctly in the global DNS system because the Plesk server is not set up to manage DNS.

To use DNS with a Plesk server:
1. DNS Server should be installed in Plesk Installer.
2. Log into Plesk and find your Name Servers in Plesk > Domains > example.com > DNS Settings, take the "value" for the record type "NS".
3. Then, it is required to change Name Server for your domain at your domain registrar's account.

503 bad sequence of commands - Fix it now ?

This article covers methods to fix the email error, "503 bad sequence of commands" which happens as a result of a number of reasons.

To resolve the SMTP response '503 Bad sequence of commands', consider the following (the complete guide covers each point in detail):
1. Identify the host behind the IP shown in the error (10.0.0.0 in this example) and examine that box; this is where the problem is most likely to be.
2. Verify your internal domain (domain.com, domain.net, etc.) is set up as expected on that host.
3. You may also add a test email account and check whether it works as expected. If it does, the issue likely lies with the original email account or with the service provider.

Shrink VMDK Virtual Disk Size on VMWare ESXi - Do it Now ?

This article covers how to shrink VMDK Virtual Disk Size on VMWare ESXi.
By default, VMware creates "growable" disks that grow larger in size as you add data.

Unfortunately, they don't automatically shrink when you remove data.
You'll need to clean up or compact your disks to actually free up space on your hard drive.

VMware Workstation also allows you to create snapshots, which contain a complete "snapshot" of a virtual machine's state at the point in time you created them.
These can take a lot of space if the virtual machine has changed significantly since then.
You can free up additional space by deleting snapshots you no longer need.

1. To view the snapshots for a virtual machine, select the virtual machine in VMware Workstation and click VM > Snapshot > Snapshot Manager.
2. To delete a snapshot you no longer need, right-click it in the Snapshot Manager window and select "Delete". It will be removed from your computer.
3. You won't be able to restore your virtual machine to that previous point in time after deleting the snapshot, of course.

Before we try to shrink the virtual disk files, we should try to remove any unneeded files from the virtual machine to free space.

For example, on Debian-based VMs, you can run:

$ apt-get clean

To clear out the local repository of retrieved package files.
Next, run the command below to fill the unused space with zeros (it will eventually stop with a 'No space left on device' error, which is expected):

$ cat /dev/zero > zero.fill; sync; sleep 1; sync; rm -f zero.fill


Free Disk Space In VMware Workstation

In VMware Workstation, first power off the virtual machine you want to compact. You can't complete this process if it's powered on or suspended.
1. Select the virtual machine you want to compact in the main window and click VM > Manage > Clean Up Disks.
2. The tool will analyze the selected virtual machine's disk and show you how much space you can reclaim.
To reclaim the space, click "Clean up now".
If no space can be freed, you'll see a "Cleanup is not necessary" message here instead.
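The steps above apply to VMware Workstation. On an ESXi host itself, a commonly used approach (a sketch; the datastore path below is hypothetical, and the VM should be powered off first) is to zero free space inside the guest as shown earlier, then punch out the zeroed blocks of the thin-provisioned disk with vmkfstools:

```
$ vmkfstools -K /vmfs/volumes/datastore1/myvm/myvm.vmdk
```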

PiP is not recognized as an internal or external command - Fix it Now ?

This article covers different methods to resolve PiP is not recognized as an internal or external command.

Basically, the error, "PiP is not recognized as an internal or external command" happens when we try to install Python packages via a Command Prompt window.
PiP is a recursive acronym for "Pip Installs Packages".

It's essentially a package management system used to install and manage software packages written in Python. Most users make use of PiP to install and manage Python packages found in the Python Package Index.

To add PIP to the PATH environment variable using the Windows GUI:

1. Press Windows key + R to open up a Run dialog box. Then, type "sysdm.cpl" and press Enter to open up the System Properties screen.
2. Inside the System Properties screen, go to the Advanced tab, then click on Environment Variables.
3. In the Environment Variables screen, go to System variables and click on Path to select it. Then with the Path selected, click the Edit… button.
4. In the Edit environment variable screen, click on New and add the path where the PiP installation is located. For Python 3.4, the default location is C:\Python34\Scripts.
5. Once the path is added, open a fresh CMD window and try to install a python package that comes with PiP.

You should no longer see the "pip is not recognized as an internal or external command" error.
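If you prefer not to modify PATH at all, a common workaround (not part of the steps above) is to invoke pip through the interpreter itself, which needs no standalone pip executable on the PATH; on Windows the equivalent is "py -m pip ..." or "python -m pip ...":

```shell
# Check that pip is reachable as a module of the interpreter:
python3 -m pip --version
# Packages can then be installed the same way, e.g.:
# python3 -m pip install somepackage
```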

Undefined index notice in Joomla - Fix it now ?

This article covers how to resolve the Undefined index notice in Joomla. A notice, in PHP terms (PHP is the scripting language that powers Joomla), is more or less a complaint. For example, if you use a deprecated function such as ereg_replace (we have fixed quite a few sites showing the "ereg_replace() is deprecated" notice), PHP will complain with a notice.
A notice may also be displayed if you're trying to use questionable casting that PHP thinks will not return the result that you want (for example, if you try to forcefully cast an array into a string).

To fix this Joomla warning:

Change PHP's error reporting in the .htaccess file to hide all errors.
You can do that by adding the following code to your .htaccess file:

php_flag display_startup_errors off
php_flag display_errors off
php_flag html_errors off

The above code will ensure that no error whatsoever is displayed on your website.

Note that if you have an Error Reporting setting in your configuration settings other than "Default", then this setting will override the error reporting defined in your .htaccess.
For example, if your Error Reporting is set to "Maximum", then the above code in your .htaccess file has no effect.


Drupal 406 error - Fix it Now ?

This article covers methods to resolve Drupal 406 error.

There are many errors that you may see as you visit different websites across the web.

One of the more common ones is the 406 – Not Acceptable error.

Cause for Drupal 406 error:

With regard to a site on your hosting account, the cause of the 406 error is usually a mod_security rule on the server.
Mod_security is a security module in the Apache web server that is enabled by default on all hosting accounts.
If a site, page, or function violates one of these rules, the server may send the 406 Not Acceptable error.

To prevent Drupal 406 Not acceptable error:

Mod_security can be turned off. You can also disable specific ModSecurity rules or disable ModSecurity for each domain individually.
If you would like mod_security disabled you can disable mod_security via our Modsec manager plugin in cPanel.

Windows error "The volume does not contain a recognized file system" - Fix it Now ?

This article covers methods to fix 'The volume does not contain a recognized file system' the Windows error for our customers.

What Caused The Volume Does Not Contain the Recognized File System Error ?

Here are some of the reasons that can cause this error to occur on devices:
1. System re-installation
2. Virus, Trojan, or malware infection
3. Unsafe system shutdown
4. Failure of file system conversion
5. Deletion of essential system files by mistake
6. Presence of bad sectors
7. User error
8. Insufficient power supply

To fix this Windows error:

1. Proceed to Start and click on My Computer or This PC.
2. Select that drive that is not accessible and then choose Properties by right-clicking it.
3. From the Properties window, select the Tool tab and click on the Check button from the Error checking.
4. Select the Scan Drive option.
Once the scanning process is complete, please go back to This PC or My Computer to check whether the drive is fixed or not.

Unable to connect to VNC Server using your chosen security setting - How to fix it ?

This article covers method to resolve Unable to connect to VNC Server using your chosen security setting.
Basically, this error can be triggered by incompatible encryption settings or by the VNC version running on the remote server.

To fix this VNC error, simply apply the following:

1. On the remote computer, change the VNC Server Encryption parameter to something other than AlwaysOff.
2. Change the VNC Viewer Encryption parameter to Server, PreferOn or PreferOff.

Shopify error 429 too many requests - Fix it Now ?

This article covers the Shopify error '429 too many requests'. Basically, 429 errors can be triggered by an increased number of API requests.
Calls to the REST Admin API are governed by request-based limits, which means you should consider the total number of API calls your app makes.

In addition, there are resource-based rate limits and throttles.

To avoid rate limit errors in Shopify:

Designing your app with best practices in mind is the best way to avoid throttling errors.
1. Optimize your code to only get the data that your app requires.
2. Use caching for data that your app uses often.
3. Regulate the rate of your requests for smoother distribution.
4. Include code that catches errors. If you ignore these errors and keep trying to make requests, then your app won’t be able to gracefully recover.
5. Use metadata about your app's API usage, included with all API responses, to manage your app’s behavior dynamically.
6. Your code should stop making additional API requests until enough time has passed to retry.

The recommended backoff time is 1 second.
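The retry-with-backoff advice above can be sketched in shell. Here fake_request is a hypothetical stand-in for a real Shopify API call (in practice, curl capturing the HTTP status code); it simulates two throttled responses followed by success:

```shell
# Simulated API call: returns status 429 twice, then 200.
fake_request() {
    count=$((count + 1))
    if [ "$count" -le 2 ]; then status=429; else status=200; fi
}

count=0
attempt=0
status=429
while [ "$status" = "429" ] && [ "$attempt" -lt 5 ]; do
    fake_request
    attempt=$((attempt + 1))
    if [ "$status" = "429" ]; then
        sleep 1   # the recommended 1-second backoff
    fi
done
echo "final status: $status after $attempt attempts"
```

This prints "final status: 200 after 3 attempts": two throttled tries, each followed by a 1-second wait, then a successful call.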

VNC Server is not currently listening for cloud connections - Fix it Now ?

This article covers methods to fix the VNC error 'VNC Server is not currently listening for cloud connections' for our customers.

If you see this message when trying to connect from VNC Viewer, please check the following:
1. Check that the remote computer is connected to the Internet. If it isn't, you won't be able to connect.
2. It may be that the remote computer is asleep. It is advisable to prevent a remote computer sleeping or hibernating while remote access is required:
Windows: In Control Panel > Power Options > Change when the computer sleeps, make sure Put the computer to sleep when plugged in is Never.
Mac: In System Preferences > Energy Saver, make sure Prevent computer from sleeping automatically when the display is off is selected.
3. Check whether cloud connections have been disabled in VNC Server's Options dialog, under the Connections heading. Make sure the option is enabled.
4. You may have the wrong team selected in VNC Viewer. Please ensure you have selected the correct team for the computer to which you want to connect.
5. If you have purchased a subscription but did not subscribe from within your trial team, you will need to join your computer(s) to the new, paid-for team.

How to Manage Indices in Nagios Log Server - Do this now ?

This article covers how to manage indices in Nagios Log Server. An index in Nagios Log Server is how the Elasticsearch database stores log data. Nagios Log Server creates an index for every day of the year, which makes it easy to age out old data when it is no longer required.
Nagios Log Server is a clustered application: it consists of one or more instances of Nagios Log Server. An instance is an installation of Nagios Log Server; it participates in the cluster and acts as a location for received log data to reside. The log data is spread across the instances using Elasticsearch, a special database used by Nagios Log Server.

Enable CDN in Prestashop and Resolve related issues - How to do it ?

This article covers how to enable CDN on PrestaShop for our customers.
You can Speed up your website with the PrestaShop CDN addon.
Faster loading leads immediately to happier users and higher conversions.
Making your pages load faster will also improve your SEO.
Google ranks faster websites higher, so you'll soon receive more visitors from search engines.

To Enable CDN for PrestaShop:

1. Make sure PrestaShop is installed and works normally.
2. Login to Prestashop admin panel (e.g. http://prestashop.testing.com.my/admin1234/)
3. Navigate to Advanced Parameters > Performance.
4. Scroll down the page to Media servers and fill in the CDN hostname.
5. Click Save at the top right corner to save the setting.

Error: Function lookup() did not find a value for the name DEFAULT_EXEC_TIMEOUT

This article covers how to fix this issue found while installing OpenStack with packstack.
If you installed packstack with the EPEL repo enabled, you need to uninstall it and all its dependencies, then re-install it after disabling EPEL, so the proper versions of the dependencies are installed.
1. To begin, ensure that epel repo is disabled and try again.
2. Run the following commands:

# yum autoremove epel-release
# yum autoremove openstack-packstack
# yum clean all
# yum install -y openstack-packstack

VNC error 'Timed out waiting for the response from the host computer' - Fix it Now ?

This article covers methods to fix VNC 'Timed out waiting for the response from the host computer' error for our customers.


1. You can try adding a firewall rule:

$ sudo /sbin/iptables -I INPUT 1 -p TCP --dport 5901:5910 -j ACCEPT

2. Or directly modify the file /etc/sysconfig/iptables file and add a line:

-A INPUT -p tcp -m state --state NEW -m tcp --dport 5901:5910 -j ACCEPT

3. Restart the iptables service:

$ service iptables restart

4. If there is no iptables.service file, use yum to install it:

$ yum install iptables-services

5. Then Run the command,

$ sudo /sbin/iptables -I INPUT 1 -p TCP --dport 5901:5910 -j ACCEPT

After this, the firewall does not need to be restarted. Connect with the VNC client again and the connection should come up.
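On distributions where firewalld has replaced the iptables service (for example CentOS/RHEL 7 and later), the equivalent rule can be added with firewall-cmd instead:

```
$ sudo firewall-cmd --permanent --add-port=5901-5910/tcp
$ sudo firewall-cmd --reload
```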

Empty Screen in Nagios XI for Wizard - Fix it Now ?

This article covers ways to fix the 'Empty Screen in Nagios XI' issue for our customers. On some pages of XI you may come across empty screens, such as no configuration wizards appearing under the Configure menu.
When plugins, components, or wizards are not installed through the proper menus, this creates problems in Nagios XI, such as "wiping out" all wizards so they cannot be viewed in the web interface, blank pages in the web browser, and other odd behaviors.

To fix this Nagios error:

Remove the problematic component/wizard by running the following in a terminal as root (replace the example somedashlet/somecomponent/somewizard names with the actual problematic item):

$ rm -rf /usr/local/nagiosxi/html/includes/components/somedashlet
$ rm -rf /usr/local/nagiosxi/html/includes/components/somecomponent
$ rm -rf /usr/local/nagiosxi/html/includes/configwizards/somewizard

Improve phone support efficiency - Facts you need to know ?

This article covers some key points to improve phone support efficiency.
In the contact centre environment, becoming more efficient involves reviewing internal processes, eliminating unnecessary steps and duplicate handling, and often implementing new processes. 

The main aims are to reduce call handling times and improve first call resolution.

Methods to Boost Contact Centre Efficiency:
1. Migrate to a cloud hosted solution
Cloud hosted technology solutions don't rely on equipment infrastructure, and therefore scale from five agents to one thousand and back again without incurring any downtime.
2. Improve your self-service options
Customers want choice.  They want to be able to take care of routine and straightforward interactions themselves.
3. Implement automatic call distribution
Poor call traffic distribution results in customers being sent to the wrong department, which ties up personnel and delays them in getting on with helping the right people.
4. Improve your verification process
The average time to verify the identity of a user using Personal Verifiable Questions (PVQs) is 45-90 seconds.
5. Offer a call back service
A cost-effective way to improve team efficiency, and the customer experience, is to use call-backs to defer calls until a later time. 

There are two options for call backs.

i. Virtual queuing – whereby the customer's place in the queue is held and they are called back when their turn arrives.
ii. A timed call-back, where customers are offered a selection of time slots for their call-back. This option allows you to schedule the call back to take place during a "trough" period, and so keeps staff productive during quieter periods.

Activate python virtualenv in Dockerfile - How to perform it ?

This article covers how to Activate python virtualenv in Dockerfile.

Basically, to package a Python application in a Docker image, we often use virtualenv. However, to use a virtualenv, we need to activate it.
Some argue there is no point in using virtualenv inside a Docker container unless you run multiple apps in the same container — and in that case the better solution is to architect the app differently and split it into multiple containers.

There are, however, perfectly valid reasons for using a virtualenv within a container.
You don't necessarily need to activate the virtualenv to install software or use it.
Try invoking the executables directly from the virtualenv's bin directory instead:

FROM python:2.7
RUN virtualenv /ve
RUN /ve/bin/pip install somepackage
CMD ["/ve/bin/python", "yourcode.py"]


One solution is to explicitly use the path to the binaries in the virtualenv.

In this case we only have two repetitions, but in more complex situations you’ll need to do it over and over again.
Besides the lack of readability, repetition is a source of error.
As you add more calls to Python programs, it's easy to forget to add the magic /opt/venv/bin/ prefix.
It will (mostly) work though:

FROM python:3.8-slim-buster
RUN python3 -m venv /opt/venv
# Install dependencies:
COPY requirements.txt .
RUN /opt/venv/bin/pip install -r requirements.txt
# Run the application:
COPY myapp.py .
CMD ["/opt/venv/bin/python", "myapp.py"]
The only caveat is that if any Python process launches a sub-process, that sub-process will not run in the virtualenv.
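A common alternative (a sketch, not from the original article) that avoids both the repetition and the sub-process caveat is to prepend the virtualenv's bin directory to PATH in the image; plain python and pip then resolve inside the venv, and sub-processes inherit the setting through the environment:

```dockerfile
FROM python:3.8-slim-buster
RUN python3 -m venv /opt/venv
# Make the venv's executables take precedence for every later step:
ENV PATH="/opt/venv/bin:$PATH"
# Install dependencies:
COPY requirements.txt .
RUN pip install -r requirements.txt
# Run the application:
COPY myapp.py .
CMD ["python", "myapp.py"]
```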

Install Bcrypt in Docker and resolve related errors

This article covers how to install Bcrypt in Docker and fix relating Docker errors.

To fix bcrypt error on Docker:

The error looks like this,

internal/modules/cjs/loader.js:807
app_1 | return process.dlopen(module, path.toNamespacedPath(filename));

To resolve, simply Add the following lines of code to the start.sh file,

#!/usr/bin/env bash

# install new dependencies if any
npm install
# uninstall the current bcrypt modules
npm uninstall bcrypt
# install the bcrypt modules for the machine
npm install bcrypt
echo "Starting API server"
npm start

Here,

i. npm uninstall bcrypt would remove bcrypt modules for the other operating system.
ii. npm install bcrypt would install for the current machine that the app would be running on.

NRPE: No Output Returned From Plugin - How to fix this Nagios error ?

This article covers how to resolve the Nagios error, NRPE: No Output Returned From Plugin. This error usually happens as a result of incorrect permissions or a missing plugin.

To fix this Nagios error:

1. The most common solution is to check the permissions on the check_nrpe binary on the Nagios XI server:

ls -la /usr/local/nagios/libexec/check_nrpe

The expected permissions should resemble:

-rwxrwxr-x. 1 nagios nagios  75444 Nov 21 01:38 check_nrpe

2. If not, change ownership to user/group "nagios" and fix the permissions to match:

$ chown nagios:nagios /usr/local/nagios/libexec/check_nrpe
$ chmod 775 /usr/local/nagios/libexec/check_nrpe

Cloud SQL Proxy error – An attempt was made to access a socket

This article covers methods to resolve Cloud SQL Proxy error. 

The Cloud SQL Proxy error looks like this:

An attempt was made to access a socket in a way forbidden by its access permissions


The SQL service occupies port 3306 locally, so the Cloud SQL Proxy cannot bind to that port while the service is running.

1. Stop the SQL service so the Cloud SQL Proxy can bind to the port.

2. In case of Windows 10: Go to Task Manager -> Services -> MySQL57

3. Right click and stop that task. 

4. Once you have done that try running the same command again. 

It'll work and show as output:

Listening on 127.0.0.1:3306 for <instance-name>


Requirements for using the Cloud SQL Auth proxy.

To use the Cloud SQL Auth proxy, you must meet the following requirements:

1. The Cloud SQL Admin API must be enabled.

2. You must provide the Cloud SQL Auth proxy with Google Cloud authentication credentials.

3. You must provide the Cloud SQL Auth proxy with a valid database user account and password.

4. The instance must either have a public IPv4 address, or be configured to use private IP.

The public IP address does not need to be accessible to any external address (it does not need to be added as an authorized network address).

Plesk error Access to the path is denied - Fix it Now

This article covers Plesk Access to the path is denied error. For instance, while trying to access http://example.com/testfolder/test.aspx, it produces the error:

Access to the path %plesk_vhosts%example.com\httpdocs\testfolder is denied

This signifies that the Required system user and/or permissions are not configured properly for the directory %plesk_vhosts%example.com\httpdocs\testfolder in Plesk.


To fix this Plesk error:

1. Connect to the server via RDP.

2. When default permissions on the domain folder are lost, the following actions can be performed to restore them:

i. For Plesk 12.5, Onyx and Obsidian:

"%plesk_cli%\repair.exe" --repair-webspace-security -webspace-name example.com

If it is necessary to repair permissions for all domains, the following command should be used:

"%plesk_cli%\repair.exe" --repair-all-webspaces-security

Also, Plesk Reconfigurator could be used: in the Windows Start menu, select All Programs > Plesk > Plesk Reconfigurator and select Repair Plesk installation > Plesk Virtual Hosts Security > Check.

ii. For Plesk before 12.5:

Go to the Domains page, mark the required domains, and click on the Check permissions button.

Then uncheck the Check-only mode checkbox and click OK.

Plesk error PHP has encountered an Access Violation - Fix it now

This article covers how to fix PHP has encountered an Access Violation which occurs in the Windows server with the Plesk control panel. 

Copy libmysql.dll from C:\Program Files (x86)\SWsoft\Plesk\Additional\PleskPHP5 to C:\WINDOWS\system32.

Wait a few minutes and it should fix the issue.


To fix PHP has encountered an Access Violation at XXXXX in Plesk:

1. Connect to the server via SSH.

2. Create a backup of the psa database:

plesk db dump psa > /root/psa_backup.sql

3. Download the attached script:

$ wget https://plesk.zendesk.com/hc/article_attachments/115001860533/script_kb213376309

4. Make the script executable:

$ chmod +x script_kb213376309

5. Launch the script for the affected subscription:

Note: change the "example.com" website in the command below to the correct one.

$ ./script_kb213376309 example.com

6. If an error like below appears:

ERROR 1062 (23000) at line 5: Duplicate entry '123-789' for key 'PRIMARY' exit status 1

find the duplicate record in the database:

Note: change the "123" ipCollectionId in the command below to the correct one based on the error message regarding the duplicate entry.

plesk db "select * from IpAddressesCollections where ipCollectionId=123;"

7. Remove the duplicate record from the database:

Note: change the "123" ipCollectionId in the command below to the correct one based on the error message regarding the duplicate entry

Change the "456" ipAddressId in the command below to the correct one based on the output from the previous step.

$ plesk db "delete from IpAddressesCollections where ipCollectionId=123 and ipAddressId=456;"

Nagios error Service check timed out after n Seconds

This article covers how to resolve 'Service check timed out after n Seconds' error for our customers.

You can increase the timeout on the check, though you will have to alter the check in XI and the plugin timeout in the ncpa.cfg file on the remote host.


If it is related to Nagios XI check_xi_ncpa Timeout:

This timeout is how long the check_xi_ncpa command on the Nagios XI server will wait for a response from the NCPA agent. 

By default the timeout is not set, thereby defaulting to the plugin timeout or the global timeout.

1. In the Nagios XI web interface navigate to Configure > Core Config Manager > Commands. 

2. This brings up the Commands page, use the Search field to search for ncpa and click Search.

3. Click the check_xi_ncpa command.

4. You can change the timeout in Nagios XI with the switch -T in the check_xi_ncpa command.

5. In the Command Line, add -T <time value in seconds> after $HOSTADDRESS$. Ex. -T 120

6. Save your changes and then click the Apply Configuration button.

Nagios NDOUtils Message Queue Exceeded – Fix it Now

This article covers how to resolve the Nagios error "NDOUtils: Message Queue Exceeded", which occurs when the volume of messages increases.

NDOUtils uses the operating system kernel message queue. As the volume of messages grows, the kernel settings need to be tuned to allow more messages to be queued and processed.

A flood of messages related to ndo2db appears in /var/log/messages, like:

ndo2db: Error: max retries exceeded sending message to queue. Kernel queue parameters may neeed to be tuned. See README.
ndo2db: Warning: queue send error, retrying... 


Nature of this Nagios error:

In Nagios you experience the following symptoms:

1. Missing hosts or services or status data

2. Takes a very long time to restart the Nagios process

3. Unusually high CPU load



How to fix Nagios error, NDOUtils: Message Queue Exceeded ?

The following commands are for the msgmni option. 

For the grep command you executed previously:

1. If it did not return output, this command will add the setting to the /etc/sysctl.conf file:

$ echo 'kernel.msgmni = 512000' >> /etc/sysctl.conf

2. If it did return output, this command will update the setting in the /etc/sysctl.conf file:

$ sed -i 's/^kernel\.msgmni.*/kernel\.msgmni = 512000/g' /etc/sysctl.conf

3. After making those changes, execute the following command:

$ sysctl -p

4. You need to restart services using the commands below:

$ systemctl stop nagios.service
$ systemctl restart ndo2db.service
$ systemctl start nagios.service
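To confirm the new limit is active and inspect the current kernel message queues, you can optionally run the following (ipcs is provided by the util-linux package on most distributions):

```
$ sysctl kernel.msgmni
$ ipcs -q
```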

Manage Transaction Log File during Data Load - Do it Now

This article covers how to manage transaction log files in SQL Server for our customers. A transaction log is a file that is an integral part of every SQL Server database. It contains log records produced during the logging process in a SQL Server database.

The transaction log is the most important component of a SQL Server database when it comes to disaster recovery; however, it must be uncorrupted.

The only way to truncate the log, so the space can be reused, is to perform a SQL transaction log backup. Therefore the most common reason for a transaction log file to have grown extremely large is because the database is in the FULL recovery model and LOG backups haven't been taken for a long time.


How do I stop a transaction log from filling up?

1. To keep the log from filling up again, schedule log backups frequently. 

2. When the recovery mode for a database is set to Full, then a transaction log backup job must be created in addition to backing up the database itself.


To fix a transaction log for a database is full:

1. Backing up the log.

2. Freeing disk space so that the log can automatically grow.

3. Moving the log file to a disk drive with sufficient space.

4. Increasing the size of a log file.

5. Adding a log file on a different disk.

Set Up Amazon WorkSpaces - Step by Step Process

This article covers how to set up Amazon WorkSpaces for our customers. Amazon WorkSpaces is a managed, secure Desktop-as-a-Service (DaaS) solution. With Amazon WorkSpaces, your users get a fast, responsive desktop of their choice that they can access anywhere, anytime, from any supported device.

With Amazon WorkSpaces, you can provision virtual, cloud-based Microsoft Windows or Amazon Linux desktops for your users, known as WorkSpaces.

Generally, Workspaces are meant to reduce clutter and make the desktop easier to navigate. Workspaces can be used to organize your work. For example, you could have all your communication windows, such as e-mail and your chat program, on one workspace, and the work you are doing on a different workspace.

The Amazon WorkSpaces Free Tier provides two Standard bundle WorkSpaces with 80 GB Root and 50 GB User volumes, running in AutoStop mode, for up to 40 hours of combined use per month, for two calendar months, from the time you create your first WorkSpace.


Key Features of Amazon WorkSpaces:

1. The end-users can access the documents, applications, and resources using devices of their choice such as laptops, iPad, Kindle.

2. Network Health Check-Up verifies that the network and Internet connections are working, checks whether WorkSpaces and their associated registration services are accessible, and checks whether port 4172 is open for UDP and TCP access.

3. Client Reconnect feature allows the users to access their WorkSpace without entering their credentials every time when they disconnect.

4. The Auto Resume Session feature allows the client to resume a session that is disconnected for any network connectivity reason, within 20 minutes by default. This can be extended up to 4 hours. However, users can disable this feature at any time in the group policy section.

5. Console Search feature allows Administrators to search for WorkSpaces by their user name, bundle type, or directory.


Some AWS Limitations:

1. AWS service limits, which are set by the platform.

2. Technology limitations. Notably, this limiting factor applies to all cloud services, not just AWS.

3. Lack of relevant knowledge by your team.

4. Technical support fee.

5. General Cloud Computing issues.

Setup and Configure tmpmail - Step by Step Process

This article covers method to Setup and Configure tmpmail.

Basically, tmpmail is a handy utility that lets CLI warriors create and read temporary email addresses right from the command line. 

By default, email addresses are created at random unless a specific email address follows the --generate flag.

Currently, w3m renders the emails in an HTML format within the terminal. If preferred, a user can use a GUI or text-based browser to view the email by passing the --browser flag followed by the command needed to launch the web browser of your choice.

TEMP-MAIL does not store your IP address. This means you are reliably protected from unauthorized actions that may endanger your information and compromise your privacy. All emails and data temporarily stored on the service are permanently deleted after their retention period expires.


How to install tmpmail ?

1. To install tmpmail, we can use the wget command or curl command to download the script from GitHub. 

Next, open a terminal and then copy or type in the following command:

# wget https://raw.githubusercontent.com/sdushantha/tmpmail/master/tmpmail

2. Now, run the chmod command against the script to modify the permissions, so the file is executable.

# chmod -v +x tmpmail

3. Next, we will move the file to a location somewhere in our $PATH. Use the following command to accomplish this.

# mv tmpmail /bin/
# which tmpmail
/usr/bin/tmpmail


To Generate a New tmpmail Address:

To create a new temporary email address, run the following command.

# tmpmail --generate

WDS deployment in Virtual machines – Few test cases

This article covers some methods to test Windows deployment in virtual machines. Windows Deployment Services (WDS) enables you to deploy Windows operating systems over the network, which means that you do not have to install each operating system directly from a CD or DVD.


To install Windows Deployment Services:

Windows Deployment Services ships as a built-in role of Windows Server. I will be demonstrating on WS2016; all currently supported versions provide it, and you follow nearly the same process on each of them.

1. Start in Server Manager. Use the Add roles and features link on the main page (Dashboard) or on the Manage drop-down.

2. Click Next on the introductory page.

3. Choose Role-based or feature-based installation.

4. Assuming you're running locally, you'll only have a single server to choose from. If you've added others, choose accordingly.

5. Check Windows Deployment Services.

6. Immediately upon selecting Windows Deployment Services, you’ll be asked if you’d like to include the management tools. Unless you will always manage from another server, leave the box checked and click Add Features.

7. Click Next on the Select server roles page and then click Next on the Select server features page (unless you wish to pick other things; no others are needed for this walkthrough).

8. You'll receive another informational screen explaining that WDS requires further configuration for successful operation. Read through for your own edification. You can use the mentioned command line tools if you like, but that won't be necessary.

9. You will be asked to select the components to install. Leave both Deployment Server and Transport Server checked.

10. Click Install on the final screen and wait for the installation to finish.


To create WDS Boot Images:

When a system starts up and PXE directs it to the WDS server, it first receives a boot image. The boot image should match the operating system it will deploy.

You can obtain one easily.

1. Find the DVD or ISO for the operating system that you want to install. Look in its Sources folder for a file named boot.wim. 

2. On your WDS server, right-click the Boot Images node and click Add Boot Image.

3. On the first page of the wizard, browse to the image file. You can load it right off the DVD as it will be copied to the local storage that you picked when you configured WDS.

4. You’re given an opportunity to change the boot image’s name and description. I would take that opportunity, because the default Microsoft Windows Setup (x##) won’t tell you much when you have multiples.

5. You will then be presented with a confirmation screen. Clicking Next starts the file copy to the local source directory. After that completes, just click Finish.

Disable NetBIOS and LLMNR Protocols in Windows Using GPO - How to do it

This article covers how to disable NetBIOS and LLMNR Protocols for our customers. The broadcast protocols NetBIOS over TCP/IP and LLMNR are used in most modern networks only for compatibility with legacy Windows versions. Both protocols are susceptible to spoofing and MITM attacks. 

Metasploit includes ready-made modules that allow you to easily exploit vulnerabilities in the broadcast NetBIOS and LLMNR protocols to intercept user credentials on the local network (including NTLMv2 hashes). 

To improve your network security, you need to disable these protocols on the domain network. 


In the domain environment, LLMNR broadcasts can be disabled on computers and servers using Group Policy. 

To do it:

1. Open gpmc.msc, then create a new GPO or edit an existing one that is applied to all workstations and servers;

2. Go to Computer Configuration -> Administrative Templates -> Network -> DNS Client;

3. Enable the Turn off smart multi-homed name resolution policy by setting its value to Enabled;

4. Wait while the GPO settings on clients are updated, or manually update them using the command: gpupdate /force.


To manually disable NetBIOS on Windows:

1. Open network connection properties

2. Select TCP/IPv4 and open its properties

3. Click Advanced, then go to WINS tab and select Disable NetBIOS over TCP

4. Save the changes.

DirectAdmin Email page failed to load - Fix it Now

This article covers how to fix the issue regarding the email page not loading in the DirectAdmin panel.


To fix this DirectAdmin error:

1. Edit directadmin.conf:

$ vi /usr/local/directadmin/conf/directadmin.conf

2. Add the line below to the file to enable the disk usage cache:

pop_disk_usage_cache=1

3. Edit /etc/cron.d/directadmin_cron

$ vi /etc/cron.d/directadmin_cron

and add the line below:

*/15 * * * * root echo "action=cache&type=popquota" >> /usr/local/directadmin/data/task.queue

4. Restart the crond service:

$ service crond restart

Nagios Network Analyzer - My New Source Won't Start - Best Fix

This article covers a method to resolve a Source not starting in Nagios Network Analyzer for our customers. Generally, this happens when a newly added Source does not start automatically.

When creating a new Source, Network Analyzer creates the directory structure - the folders where it will store flow data, the RRD data file, and the process's pid file. It also starts the Source (nfcapd or sfcapd) automatically once it has finished creating the new directories. A common reason for the Source failing to start is a missing rrdtool Python binding.


This problem can be resolved by installing the rrdtool-python module with the following command:

$ yum install -y rrdtool-python

Once installed, restart the nagiosna service:

$ systemctl restart nagiosna

The Source should now start.

Unable to find the User entry – Fix Apache Web Agent Installation Error

This article covers how to fix Unable to find the User entry Apache Web Agent Installation Error.

This error happens when we fail to set the user and group in the Apache httpd.conf file. You will see "Unable to find the "User" entry in the httpd.conf file, will try APACHE_RUN_USER environment variable" and/or "Unable to find the "Group" entry in the httpd.conf file, will try APACHE_RUN_GROUP environment variable" errors.

To resolve this Apache error:
1. Check whether the user and group are set; you can do this via the httpd.conf file or equivalent file (such as envvars). For example:
a. Review the httpd.conf file and check whether the user and group are set. By default, they are set to apache, for example:

$ cat httpd.conf | grep 'User\|Group'
...
User apache
Group apache
...

If they are not set, you should set them; you can set them to apache or nobody.
b. Review the envvars file to ensure the user and group are set in the APACHE_RUN_USER and APACHE_RUN_GROUP environment variables. For example:

$ cat envvars | grep 'APACHE_RUN_USER\|APACHE_RUN_GROUP'
export APACHE_RUN_USER=apache
export APACHE_RUN_GROUP=apache

If they are not set, you should set them; you can set them to apache or nobody.

2. Review the passwd and group files to check whether the user and group match what is set in your httpd.conf file or equivalent. For example:

$ cat /etc/passwd | grep apache
apache:x:48:48:apache:/usr/share/httpd:/sbin/nologin
$ cat /etc/group | grep apache
apache:x:48:


If they are not set, you should set them to match what is in the httpd.conf file or equivalent.
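The check in step 1a can be exercised against a sample file as a self-contained sketch; the file name and values below are stand-ins for your real httpd.conf:

```shell
# Create a sample httpd.conf (stand-in for the real file) with the default values
printf 'User apache\nGroup apache\n' > ./httpd.conf.sample

# Same check as in the article, run against the sample file:
grep -E '^(User|Group) ' ./httpd.conf.sample
```

If the grep prints nothing, the directives are missing and should be added (set to apache or nobody).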

Nagios Failed to Parse Date Error - Fix it now

This article covers Nagios Failed to Parse Date Error.

Basically, the logs coming in on the same input need to use the same formatting.
To fix this Nagios error, make sure that all devices use the same date format, or configure another input for these devices.
For example:

syslog {
    port => xxxx
    type => 'alternative-syslog'
    tags => 'alternative Linux-Max'
}

Attributes do not match - Fix this SQL Server Installation Error

This article covers how to resolve the SQL Server error, Attributes do not match. Basically, this error occurs during SQL Server installation or patching activity.
When you hit this issue, check whether any of the drives on your database server are compressed. If SQL Server is using a compressed drive during installation, that is likely the reason for the Attributes do not match issue.
You need to uncompress all such drives and then start the installation.

To uncompress a drive, open its Properties window:
1. Right-click the identified drive and choose Properties to see the compression status.
2. You may see that the "Compress this drive to save disk space" option is ticked.
3. Uncheck this option and click OK to apply the change.
4. Once you have confirmed that none of the drives are compressed, you can start the SQL Server installation, and this time it will be successful.

When installing SQL Server, you may run into the error "Attributes do not match.
Present attributes (Directory, Compressed), included attributes (0), excluded attributes (Compressed, Encrypted)".
This happens because you are trying to install SQL Server into a compressed folder, which is not supported.
To fix this;
1. Navigate to C:\Program Files\Microsoft SQL server folder
2. Right-click the SQL Server folder and select Properties.  
3. Under the Advanced options (via the General tab), look for "Compress contents to save disk space" and uncheck it.
4. Also, uncheck "Encrypt contents to secure data".
5. Re-run the SQL install

Nagios No lock file found - Fix this error now

This article covers different methods to resolve the error, Nagios: No lock file found.  Basically, "No lock file found in /usr/local/nagios/var/nagios.lock" means that the service isn't running.

To fix this Nagios error:
Execute the command:

$ /usr/local/nagios/bin/nagios -d /usr/local/nagios/etc/nagios.cfg

Running the command above simply starts the nagios daemon and points it to a specific config file.
The advantage of running this command manually over systemd is that "service nagios start" typically calls the /etc/rc.d/init.d/nagios script, which contains a line with parametrized environment variables:

$NagiosBin -d $NagiosCfgFile

Because every system is different, specifying neither the bin nor the config directory could lead to Nagios breaking (stopping) when it tries to start using the default installation directory paths.
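A quick way to confirm the symptom before starting the daemon is to test for the lock file directly. This sketch uses a hypothetical local path; on a real system the path is /usr/local/nagios/var/nagios.lock:

```shell
# Hypothetical local path for illustration; on a real system use
# /usr/local/nagios/var/nagios.lock
LOCKFILE=./nagios.lock

if [ -f "$LOCKFILE" ]; then
    echo "lock file present - nagios appears to be running"
else
    echo "no lock file - start the daemon"
fi
```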

Administration Page Fails To Display in Nagios Log Server

This article covers how to resolve the 'Administration Page Fails To Display' in the Nagios Log Server issue for our customers.


To fix this Nagios error, all you need to do is to:
1. Increase the PHP memory_limit in the php.ini file.
You can execute the following command:

$ find /etc -name php.ini

2. Then make the necessary changes.
3. After which you should restart Apache for the changes to take effect using one of the commands below:

$ systemctl restart apache2.service

4. Once the service has restarted, the Administration page will be accessible.

If the problem persists, please increase the value again.


When using the vi editor in Linux:
1. To make changes press i on the keyboard first to enter insert mode
2. Press Esc to exit insert mode
3. When you have finished, save the changes in vi by typing :wq and press Enter
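As a non-interactive alternative to vi, the memory_limit change can be scripted with sed. The file and values below are illustrative (a local sample rather than the real php.ini located via the find command above):

```shell
# Create a sample php.ini (stand-in for the real file)
printf 'memory_limit = 128M\n' > ./php.ini.sample

# Raise the limit in place (GNU sed); the 256M value is an example
sed -i 's/^memory_limit = .*/memory_limit = 256M/' ./php.ini.sample

grep '^memory_limit' ./php.ini.sample   # memory_limit = 256M
```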

No SSL library support - How to fix this Web Agent installation error

This article covers methods of resolving the "No SSL library support" Web Agent installation error. This issue arises when you try to install a 32-bit version of the agent on a 64-bit system; the 32-bit version of the agentadmin tool cannot open the 64-bit SSL libraries.
Therefore, if your operating system does not include native openssl packages, you must install OpenSSL.

To fix this Web Agent installation error on Linux:
1. Ensure you are installing the appropriate version of the agent; if you have a 64-bit operating system, you must install the 64-bit agent.
2. Ensure either the operating system provides native openssl packages or OpenSSL is installed. If you are using OpenSSL, you can check that the OpenSSL libraries are in the correct location as follows, and add them if they are missing:
a. Check that the LD_LIBRARY_PATH environment variable is set. For example: $ echo $LD_LIBRARY_PATH
b. Check that the OpenSSL libraries (libcrypto.so and libssl.so) are available in the path specified in this environment variable (LD_LIBRARY_PATH).

Configure Multi-Tenancy in Nagios Log Server - How to perform it

This article covers how to configure multi-tenancy in Nagios log server.
Multi-Tenancy works by assigning which hosts a user is allowed to see in the Nagios Log Server interface.
Hosts can also be placed in a host list and then applied to the users who will be allowed access.
NOTE: API users and administrators will be able to work around any restrictions placed on them; this functionality only applies to regular users.

JFTP Bad response error in Joomla - Best Method to resolve

This article covers how to fix JFTP bad response error in Joomla for our customers.
When trying to install new extensions in Joomla, some users might come across some errors indicating a “Bad Response”, where the extensions are not successfully installed.

These errors include:

-JFTP::mkdir: Bad response
-JFTP::chmod: Bad response
-JFTP::store: Bad response

To fix this error, you could try changing the permissions of your configuration.php file to 777 (755 is the default), and also those of the corresponding directory, recursively.
To do this, modify the configuration.php file.
Simply search for the FTP settings within this file and input the FTP login details in the following fields:
public $ftp_host = '';
public $ftp_port = '21';
public $ftp_user = '';
public $ftp_pass = '';
public $ftp_root = '';
public $ftp_enable = '1';

Manage Scheduled Tasks with PowerShell - How to do it

This article covers how to use the PowerShell features to create scheduled tasks. The Get-ScheduledTask cmdlet gets the task definition object of a scheduled task that is registered on a computer. You can use PowerShell to create and manage scheduled tasks. Managing scheduled tasks with PowerShell is made possible with the use of the ScheduledTasks module that’s built-in to Windows.
With the PowerShell Scheduled Tasks module, setting up scheduled tasks using PowerShell commands is made possible. This module provides the opportunity and means to create and deploy scheduled tasks programmatically on the local and remote computers.

Important scheduled task component:
1. Action – the action that is executed by the scheduled task. An action is typically to run a program or a script. A scheduled task can have more than one action.
2. Trigger – controls when the scheduled task runs. Triggers can be time-based, such as a daily or hourly recurrence schedule. Triggers can also be activity-based, running a task on detected activities such as computer startup, a user logging in, or logged events.
3. Principal – controls the security context used to run the scheduled task. Among other things, a principal includes the user account and the required privilege used by the scheduled task.
4. Settings – a set of options and conditions that controls the scheduled task's behavior. As an example, you can customize a task to be removed after a given number of consecutive days of being unused.

To add a Trigger for a scheduled task using PowerShell:
The cmdlet to use for creating a trigger is the New-ScheduledTaskTrigger cmdlet.
The command below creates a trigger to run daily at 3 PM.

Copy and run the code in PowerShell:

# Create a new trigger (daily at 3 PM)
$taskTrigger = New-ScheduledTaskTrigger -Daily -At 3PM
$taskTrigger

This will create a trigger that fires daily at 3 PM.

Enable Windows Lock Screen after Inactivity via GPO - How to do it

This article covers how to enable the Windows Lock Screen on domain computers or servers using Group Policy. Locking the computer screen when the user is inactive (idle) is an important element of information security.
A user may forget to lock their desktop (with the keyboard shortcut Win + L) when they need to leave the workplace for a short time.
Any other employee or client nearby could then access their data. The auto-lock screen policy fixes this flaw.
After some period of inactivity (idle time), the user's desktop will be automatically locked, and the user will need to re-enter their domain password to return to the session.

To enable lock screen with group policy:
1. Create a new GPO then edit it and go to:
Computer Config>Policies>Windows Settings>Security Settings>Local Policies>Security Options.
2. Find Interactive logon: Machine inactivity limit.
3. Set that to whatever time you want and it will lock the PC after it hits that timer.

To change the lock screen wallpaper using Group Policy:
1. Run gpedit.msc.
2. Go to "Computer Configuration\Policies\Administrative Templates\Control Panel\Personalization".
3. Enable the GP "Force a specific default lock screen image".
4. Specify the path to the image file.
5. Click OK.

To Find Windows 10's Spotlight Lock Screen Pictures:
1. Click View in File Explorer.
2. Click Options.
3. Click the View tab.
4. Select "Show hidden files, folders and drives" and click Apply.
5. Go to This PC > Local Disk (C:) > Users > [YOUR USERNAME] > AppData > Local > Packages > Microsoft.Windows.ContentDeliveryManager_cw5n1h2txyewy > LocalState > Assets.

Boot an EC2 Windows instance into DSRM - How to perform this task

This article covers how to boot an EC2 Windows instance into DSRM. If an instance running Microsoft Active Directory experiences a system failure or other critical issues, you can troubleshoot the instance by booting into a special version of Safe Mode called Directory Services Restore Mode (DSRM). In DSRM, you can repair or recover Active Directory.


How to Configure an Instance to Boot into DSRM?

1. To boot an online instance into DSRM using the System Configuration dialog box

i. In the Run dialog box, type msconfig and press Enter.

ii. Choose the Boot tab.

iii. Under Boot options choose Safe boot.

iv. Choose Active Directory repair and then choose OK. The system prompts you to reboot the server.


2. To boot an online instance into DSRM using the command line

From a Command Prompt window, run the following command:

bcdedit /set safeboot dsrepair

If an instance is offline and unreachable, you must detach the root volume and attach it to another instance to enable DSRM mode.

Cannot download Docker images behind a proxy - Fix it Now

This article covers the error, Cannot download Docker images behind a proxy. 

You can fix this docker issue by doing the following:

1. In the file /etc/default/docker, add the line:

export http_proxy='http://<host>:<port>'

2. Restart Docker:

$ sudo service docker restart


Also, you can Follow the steps given below to fix this docker error:

1. Create a systemd drop-in directory for the docker service:

$ mkdir /etc/systemd/system/docker.service.d

2. Create a file called /etc/systemd/system/docker.service.d/http-proxy.conf and add the HTTP_PROXY env variable:

[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"

3. If you have internal Docker registries that you need to contact without proxying you can specify them via the NO_PROXY environment variable:

Environment="HTTP_PROXY=http://proxy.example.com:80/"
Environment="NO_PROXY=localhost,127.0.0.0/8,docker-registry.somecorporation.com"

4. Flush changes:

$ sudo systemctl daemon-reload

5. Verify that the configuration has been loaded:

$ sudo systemctl show --property Environment docker
Environment=HTTP_PROXY=http://proxy.example.com:80/

6. Restart Docker:

$ sudo systemctl restart docker
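The drop-in from steps 2-3 can also be generated from a script. This sketch writes it to a local directory instead of /etc/systemd/system/docker.service.d so it is safe to run anywhere; the proxy host and registry are the example values from above:

```shell
# Write the drop-in to a local directory
# (on a real host: /etc/systemd/system/docker.service.d)
DROPIN_DIR=./docker.service.d
mkdir -p "$DROPIN_DIR"

cat > "$DROPIN_DIR/http-proxy.conf" <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Environment="NO_PROXY=localhost,127.0.0.0/8,docker-registry.somecorporation.com"
EOF

# Two Environment lines should be present
grep -c '^Environment=' "$DROPIN_DIR/http-proxy.conf"   # 2
```

On a real host you would follow this with the daemon-reload and restart steps shown above.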

Define Global Environment Variables in Nagios - Fix it Now

This article covers how to define Global Environment Variables in Nagios. In some environments, when a plugin is executed by the monitoring engine, the environment variables it relies on are not loaded; the plugin then cannot find its dependencies and fails.


To Define Global Environment Variables in Nagios:

Here, you will define the variables required for your plugins globally:

a. Add the path /usr/local/important_application to the PATH environment variable

b. Add the variable ORACLE_HOME=/usr/lib/oracle/11.2/client64

1. To do this, edit a specific file that nagios checks when it starts:

$ vi /etc/sysconfig/nagios

2. Add the following lines to the file and then save:

export PATH=$PATH:/usr/local/important_application
export ORACLE_HOME=/usr/lib/oracle/11.2/client64

3. Finally, restart Nagios:

$ systemctl restart nagios.service
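To sanity-check the exports before restarting Nagios, you can evaluate them in a subshell; the application path is the example value from above and may not exist on your system:

```shell
# Run the exports in a subshell so the current environment is left untouched
(
  export PATH="$PATH:/usr/local/important_application"
  export ORACLE_HOME=/usr/lib/oracle/11.2/client64

  echo "$ORACLE_HOME"
  case ":$PATH:" in
    *:/usr/local/important_application:*) echo "PATH updated" ;;
  esac
)
```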


If you have Mod-Gearman, the following applies:

a. Re-define the PATH environment variable to include /usr/local/important_application

b. Add the variable ORACLE_HOME=/usr/lib/oracle/11.2/client64

1. This is performed by editing a specific file that mod-gearman2-worker checks when the service starts:

/etc/sysconfig/mod-gearman2-worker

2. Open an SSH session to your Mod-Gearman worker.

Type:

$ vi /etc/sysconfig/mod-gearman2-worker

3. Add the following lines to the end of the file and save:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/local/important_application
ORACLE_HOME=/usr/lib/oracle/11.2/client64

4. Now reload the daemons and restart the Mod-Gearman worker:

$ systemctl daemon-reload
$ systemctl restart mod-gearman2-worker.service

Bad interpreter No such file or directory error in Nagios

This article covers the Nagios error, 'bad interpreter: No such file or directory'. This error occurs after uploading a plugin that is in "Windows" format instead of "Unix" format; it has to do with the line endings / carriage returns.

To fix this error, you will convert the file to a Unix format:

$ yum install -y dos2unix
$ dos2unix /usr/local/nagios/libexec/check_apc_pdu_load.sh
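If dos2unix is not available, the same conversion can be done with tr, which simply strips the carriage returns. The file below is a small sample created for illustration:

```shell
# Create a sample script with Windows (CRLF) line endings
printf '#!/bin/sh\r\necho ok\r\n' > ./check_sample.sh

# Strip the carriage returns to get Unix (LF) endings
tr -d '\r' < ./check_sample.sh > ./check_sample.unix.sh

# Verify no carriage returns remain
if od -c ./check_sample.unix.sh | grep -q '\\r'; then
    echo "CR still present"
else
    echo "clean LF file"
fi
```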

Nagios error Could not bind to the LDAP server - Fix it now

This article covers tips to resolve the 'Could not bind to the LDAP server' Nagios error. It causes secure lookups on port 636, or lookups using TLS, to fail.

The check_ldap plugin makes use of OpenLDAP. The OpenLDAP package is installed as part of the Nagios XI installation because the plugins depend on it, but it is left in a non-configured state.

To resolve the problem, on each node (wtgc-nagios-01 and wtgc-nagios-02) edit the file /etc/openldap/ldap.conf and add the following line at the bottom:

TLS_REQCERT allow
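The edit can also be applied idempotently from a script. This sketch operates on a local copy so it can be run safely; on a real node you would point CONF at /etc/openldap/ldap.conf:

```shell
# Work on a local copy for illustration; use /etc/openldap/ldap.conf on a real node
CONF=./ldap.conf.sample
touch "$CONF"

# Append the directive only if it is not already there (safe to re-run)
grep -q '^TLS_REQCERT' "$CONF" || echo 'TLS_REQCERT allow' >> "$CONF"

grep '^TLS_REQCERT' "$CONF"   # TLS_REQCERT allow
```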

330 Content Decoding Failed - Nagios Web browser error

This article covers how to fix the 330 Content Decoding Failed Nagios browser error. Basically, this error occurs when an HTTP response's headers claim that the content is gzip encoded, but it is not. 

To fix this error:

The Apache web server requires zlib.output_compression to be configured to On in the /etc/php.ini file.

Execute the following command to open the file in vi:

$ vi /etc/php.ini

When using the vi editor, to make changes press i on the keyboard first to enter insert mode.

Press Esc to exit insert mode.

To locate the line zlib.output_compression = type the following:

/output_compression =

This should take you directly to the line. Change the setting to On:

zlib.output_compression = On

When you have finished, save the changes in vi by typing:

:wq

and press Enter.

The last step is to restart the Apache service using one of the commands below:

RHEL 7+ | CentOS 7+ | Oracle Linux 7+

$ systemctl restart httpd.service

Debian | Ubuntu 16/18/20

$ systemctl restart apache2.service

After the service has restarted the problem should no longer occur.

Clear Solaris Service Maintenance Status in Nagios - Troubleshoot and Resolve

This article covers how to clear the Solaris service maintenance state for Nagios. Basically, when the Nagios Core service finds an invalid configuration, the core service will not start. 

To fix this, you must first correct the configuration problem Nagios Core is complaining about.

This is normal behavior of Nagios Core, it is not specific to Solaris.

However on Solaris, after a service has failed to start several times, Solaris will put the service into what is called a Maintenance State. This state prevents a small problem from becoming a bigger problem. 

Even after fixing the problem Nagios Core is complaining about, you must also clear the maintenance state on the service before Solaris allows a service to be started again.

This means that the service is in a maintenance state, however there is not a lot of detail as to the cause of the issue except that the Start method failed repeatedly. 

It does however provide the name of a log file /var/svc/log/application-nagios:default.log.

Execute the following command to perform further troubleshooting:

$ tail -20 /var/svc/log/application-nagios:default.log


To Clear Maintenance State on Nagios:

1. Run the following command to clear the maintenance state:

$ svcadm clear nagios

2. Execute the following command to start Nagios:

$ svcadm enable nagios

3. Now check the state of the service:

$ svcs -xv nagios

Failed to register iobroker in Nagios - Solved

This article covers how to resolve Nagios error, Failed to register iobroker. This problem can occur when custom operating system limits restrict the max number of processes that can be executed.


Custom limits are defined in the /etc/security/limits.conf file

You will need to increase the hard and soft values to resolve the problems you are experiencing, for example:

# harden against fork-bombs
*               hard    nproc           10000
*               soft    nproc           10000
root            hard    nproc           10000
root            soft    nproc           10000
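You can confirm the limits currently in effect for a shell with ulimit; the values vary by system, so no particular number is assumed here:

```shell
# Soft and hard limits on the number of user processes (nproc)
echo "soft nproc: $(ulimit -Su)"
echo "hard nproc: $(ulimit -Hu)"
```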

After making the changes, it is recommended to reboot the operating system to ensure the limits are applied.

If the change does not fix the problem, you should increase the values again.

Secure osTicket with Lets Encrypt SSL Certificates - Do it Now

This article covers how to secure osTicket with Let's Encrypt SSL certificates. You can use Certbot to request SSL certificates from the Let's Encrypt Certificate Authority. The tool is not available by default and will need to be installed manually.


To Install certbot certificate generation tool:

1. Install certbot on Ubuntu / Debian:

$ sudo apt update

# Apache

$ sudo apt-get install python-certbot-apache

# Nginx

$ sudo apt-get install python-certbot-nginx


2. Install certbot on CentOS 8 / CentOS 7:

On a CentOS system run either of the following commands:

# CentOS 8

## For Apache

$ sudo yum -y install python3-certbot-apache

## For Nginx

$ sudo yum -y install python3-certbot-nginx

# CentOS 7

## For Apache

$ sudo yum -y install python2-certbot-apache

## For Nginx

$ sudo yum -y install python2-certbot-nginx

Updating Windows VM Templates on VMWare with PowerShell - How to do it

This article covers how to update Windows VM Templates on VMWare. 

The update process of a VM template on VMWare consists of the following stages:

1. A template from the Content Library is converted to a virtual machine;

2. After starting it, an administrator logs on, installs approved Windows updates using WSUS, and updates the required software;

3. After the updates have been installed, the VM is restarted, then turned off and converted back to the template.

DirectAdmin error Headers and client library minor version mismatch

This article covers how to resolve the DirectAdmin 'Headers and client library minor version mismatch' error. Basically, this error can come up even after a MySQL update and PHP rebuild via custombuild.

To resolve this error:

Perform a cleanup in custombuild and rebuild PHP:

$ cd /usr/local/directadmin/custombuild

$ ./build clean

$ ./build php n


Alternatively you can set it like this:

$ cd /usr/local/directadmin/custombuild

$ ./build set php5_ver 5.3

$ ./build set mysql 5.1

$ ./build update

$ ./build clean

$ ./build apache d

$ ./build php d

$ ./build mysql d


To rebuild zend:

$ cd /usr/local/directadmin/custombuild

$ ./build zend

Virtualization Restrictions in RedHat Linux with KVM

This article covers Virtualization Restrictions in RedHat Linux which are additional support and product restrictions of the virtualization packages.


The following notes apply to all versions of Red Hat Virtualization:

1. Supported limits reflect the current state of system testing by Red Hat and its partners. Systems exceeding these supported limits may be included in the Hardware Catalog after joint testing between Red Hat and its partners; entries in the Hardware Catalog are fully supported even if they exceed the limits posted here. In addition to supported limits reflecting hardware capability, there may be additional limits under the Red Hat Enterprise Linux subscription terms. Supported limits are subject to change based on ongoing testing activities.


2. These limits do not apply to Red Hat Enterprise Linux (RHEL) with KVM virtualization, which offers virtualization for low-density environments.


3. Guest operating systems have different minimum memory requirements. Virtual machine memory can be allocated as small as required.

USB device passthrough to Hyper-V - How to get this done

This article covers how to perform Hyper-V USB passthrough. The term USB passthrough is also used when a keyboard has female USB ports for other devices to be plugged into the keyboard; such passthroughs require more than one USB port to be used by the keyboard in order to pass USB through to the PC, often requiring an additional connection to the host system for powered ports.


What does Hyper-V USB passthrough Mean?

Hyper-V USB passthrough functionality allows you to access a USB device from within a virtual machine. There is a way to enable USB passthrough on Hyper-V for a memory stick, but you'll have to use the Windows storage subsystem.


Known issues with Hyper-V USB device passthrough when using the native methods:

1. Platform restrictions: these methods rely on the Windows storage subsystem, so using them to set up Hyper-V USB passthrough on Linux (or any other non-Windows OS) is, sadly, out of the question.

2. An extremely limited list of supported devices: for the native methods to work, your USB peripheral must be recognized as a “Mass Storage Device”. No exceptions.

3. No sharing: once your device is set in passthrough mode, you can access it only from the guest OS. That’s why using these methods to permanently connect a USB to Hyper-V is definitely not the best idea.

4. Poor choice for the cloud: with these methods, the USB device is always locked to a specific host PC, while there is no way to pin a cloud-based Hyper-V guest system to a host or predict where it is going to run for your next session.


Advantages of using RDP for Hyper-V USB passthrough:

1. Works for literally any hypervisor you can name;

2. Instant access to USB devices, once RDP connection is up;

3. Group Policy feature for overall control;

4. All USB devices plugged into your host PC are accessible from a virtual machine.


To enable USB device in Hyper-V with the Enhanced Session Mode:

1. On the host computer, go to Hyper-V Manager, right-click the name of the host and choose Hyper-V Settings.

2. In the Settings window, you will see the Server and User sections. Select Enhanced Session Mode Policy in the Server section and allow enhanced session mode by checking the corresponding box.

3. Now, choose "Enhanced Session Mode" in the User section and check the “Use enhanced session mode” box.

4. Click OK and the changes will be saved.


To allow Hyper-V access to attached USB devices:

1. Start the Hyper-V Manager and double-click the name of your virtual machine.

2. In the pop-up window, click “Show Options” to configure your VM’s future connections.

3. After that, go to the tab “Local resources” and click “More” in the section “Local devices and resources”.

4. Then, check the boxes “Other supported Plug and Play devices” and “Devices that I plug in later”. Hit OK.

5. If you want this configuration to be saved for all future connections, check the corresponding box in the “Display” tab. Click “Connect” to implement the changes.

Recover orphan innodb database from ibd file - How to perform this task

This article covers how to recover the orphan InnoDB database from the ibd file. 


An orphan InnoDB database incident mostly happens when:

1. The ibdata1 file (usually /var/lib/mysql/ibdata1) is accidentally removed.

2. The ibdata file is corrupted.


To Recover Orphaned InnoDB Tables:

1. Restart the MySQL service to recreate ibdata1, then take a backup of your database folder.

2. Login to MySQL.

3. Create a dummy database with the same name. Then, create a dummy table with the same name as the corrupted one (don’t mind the table structure for now).

4. Stop the MySQL service, then copy the .frm file from the backup you took over the dummy table's .frm file.

5. Start MySQL and have a look at the structure of your table - it should now be in place! However, don’t get too happy yet.

6. Issue a SHOW CREATE TABLE statement and copy its contents, then create the table with them.

7. Stop MySQL, copy the .ibd file from the backup directory to /var/lib/mysql/database_name and replace the existing .ibd file from the dummy table.

8. Now it's time for Percona's tools to shine - download and install the Percona Data Recovery Tool for InnoDB, then run the following (here -o is the full location of your ibdata1 file, -f the full location of your .ibd file, -d the database name and -t your table name):

ibdconnect -o /var/lib/mysql/ibdata1 -f /var/lib/mysql/database_name/table_name.ibd -d database_name -t table_name

9. Now, run a checksum check against InnoDB - make sure you get no error messages (you might need to run this tool several times):

innochecksum /var/lib/mysql/ibdata1

10. Finally, you should be good to go - simply start MySQL.

Remove Nginx on Linux in Vesta control panel - Step by Step process to do it

This article covers how to remove Nginx on Linux in the Vesta control panel. 

Vesta control panel (VestaCP) is an open source hosting control panel, which can be used to manage multiple websites, create and manage email accounts, FTP accounts, and MySQL databases, manage DNS records and more.


To uninstall VestaCP on CentOS, follow the steps below:

1. Connect to your server via SSH as root

2. Stop the Vesta service with service vesta stop:

$ service vesta stop 

3. Delete Vesta packages/software repository:

# yum remove vesta*

and

# rm -f /etc/yum.repos.d/vesta.repo

4. You may also want to remove the /usr/local/vesta folder:

# rm -rf /usr/local/vesta

5. Now we have to remove the cron jobs for the user admin.

Let's list first the cron jobs:

# crontab -u admin -l
MAILTO=admin@ibmimedia.com
CONTENT_TYPE="text/plain; charset=utf-8"
15 02 * * * sudo /usr/local/vesta/bin/v-update-sys-queue disk
10 00 * * * sudo /usr/local/vesta/bin/v-update-sys-queue traffic
30 03 * * * sudo /usr/local/vesta/bin/v-update-sys-queue webstats
*/5 * * * * sudo /usr/local/vesta/bin/v-update-sys-queue backup
10 05 * * * sudo /usr/local/vesta/bin/v-backup-users
20 00 * * * sudo /usr/local/vesta/bin/v-update-user-stats
*/5 * * * * sudo /usr/local/vesta/bin/v-update-sys-rrd
40 2 * * * sudo /usr/local/vesta/bin/v-update-sys-vesta-all
03 3 * * * sudo /usr/local/vesta/bin/v-update-letsencrypt-ssl

6. Remove the cron jobs via crontab -u admin -e:

# crontab -u admin -e

7. Save and exit.
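For reference, each line listed in step 5 follows the standard crontab field order. A minimal sketch of decoding one of them (the line is copied verbatim from the listing above):

```shell
# crontab fields: minute hour day-of-month month weekday, then the command
line='15 02 * * * sudo /usr/local/vesta/bin/v-update-sys-queue disk'
read -r min hour _ _ _ cmd <<< "$line"
echo "runs daily at $hour:$min -> $cmd"   # runs daily at 02:15 -> ...
```

Knowing the field order makes it easy to spot which jobs are Vesta's before removing them.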


DirectAdmin User too large delete on background - Methods to resolve this error

This article covers method to fix the error, DirectAdmin: User too large delete on background. Basically, this error occurs when the sum of the disk usage of any user exceeds a certain threshold.

To prevent time-outs in your browser when deleting excessively large accounts, DirectAdmin will execute the deletion by adding the command to the background’s task.queue, instead of performing the execution on the foreground.


To fix DirectAdmin: User too large delete on background error, you can connect to the server through SSH using root access, then go to DirectAdmin's installed directory as below:

cd /usr/local/directadmin/conf/

Then edit the directadmin.conf file in the directory by running

vi directadmin.conf

If the variable "get_background_delete_size" value exists in the directadmin.conf file, it will be set to 10 GB by default (get_background_delete_size=10240). 

If the variable cannot be found in the file, simply add it in. 

You can modify the value of 10240 to define the value that you wish to set.
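As a hedged sketch of that change (the 20480 value is illustrative, and the script works on a local example copy; on a real server the file is /usr/local/directadmin/conf/directadmin.conf):

```shell
# Work on a local example copy; on a real server edit
# /usr/local/directadmin/conf/directadmin.conf instead.
conf=directadmin.conf.example
printf 'get_background_delete_size=10240\n' > "$conf"   # simulate the default

# Raise the threshold to 20 GB (20480 MB), adding the key if it is absent
if grep -q '^get_background_delete_size=' "$conf"; then
  sed -i 's/^get_background_delete_size=.*/get_background_delete_size=20480/' "$conf"
else
  echo 'get_background_delete_size=20480' >> "$conf"
fi
grep get_background_delete_size "$conf"   # get_background_delete_size=20480
```

DirectAdmin may need a restart to pick up the changed value.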

Best Ubuntu APT Repository Mirror - How to get it

This article covers methods to find the best APT mirror on the Ubuntu server. 


To Find Best Ubuntu APT Repository Mirror Using Apt-smart:

Apt-smart is yet another command-line tool written in Python. It helps you find the APT mirrors that provide the best download rate for your location. It smartly retrieves mirrors by querying the Debian, Ubuntu and Linux Mint mirror lists and chooses the best mirror based on the user's country. The discovered mirrors are ranked by bandwidth and status (up-to-date, 3-hours-behind, one-week-behind, and so on).

Another notable feature of Apt-smart is that it will automatically switch to a different mirror when the current mirror is being updated. The new mirror can be selected either automatically or manually by the user. Helpfully, Apt-smart backs up the current sources.list before updating it with new mirrors.


To Install Apt-smart in Ubuntu:

Make sure you have installed Pip and run the following commands one by one to install Apt-smart:

$ pip3 install --user apt-smart
$ echo "export PATH=\$(python3 -c 'import site; print(site.USER_BASE + \"/bin\")'):\$PATH" >> ~/.bashrc
$ source ~/.bashrc


To List all mirrors based on rank:

To list all available ranked mirrors in the terminal, run:

$ apt-smart --list-mirrors

Or,

$ apt-smart -l


To Automatically update mirrors:

Instead of manually finding and updating the best mirror in Ubuntu, you can let Apt-smart choose the best APT mirror and automatically update sources.list with it, like below:

$ apt-smart --auto-change-mirror

To get help, run:

$ apt-smart --help

Amazon Redshift - Its features and how to set it up

This article covers an effective method to set up Amazon Redshift. Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. This enables you to use your data to acquire new insights for your business and customers. The first step to create a data warehouse is to launch a set of nodes, called an Amazon Redshift cluster.

Amazon Redshift is a relational database management system (RDBMS), so it is compatible with other RDBMS applications. Amazon Redshift and PostgreSQL have a number of very important differences that you need to take into account as you design and develop your data warehouse applications.

Amazon Redshift is based on PostgreSQL.

Amazon Redshift is specifically designed for online analytic processing (OLAP) and business intelligence (BI) applications, which require complex queries against large datasets.


What is the difference between Amazon Redshift, Amazon Redshift Spectrum and Amazon RDS?

Amazon Simple Storage Service (Amazon S3) is a service for storing objects, and Amazon Redshift Spectrum enables you to run Amazon Redshift SQL queries against exabytes of data in Amazon S3.

Both Amazon Redshift and Amazon RDS enable you to run traditional relational databases in the cloud while offloading database administration. 

Customers use Amazon RDS databases primarily for online-transaction processing (OLTP) workload while Redshift is used primarily for reporting and analytics.

Boost performance of Websites using Cloudflare - Tips to implement it

This article covers how to improve the performance of Websites using Cloudflare. Website speed has a huge impact on user experience, SEO, and conversion rates. Improving website performance is essential for drawing traffic to a website and keeping site visitors engaged. 

Along with the caching and CDN, Cloudflare helps protect your site against brute-force attacks and threats against your website.

Cloudflare has the advantage of serving millions of websites and so can identify malicious bots and users more easily than any operating-system firewall.


CDNs boost the speed of websites by caching content in multiple locations around the world. CDN caching servers are typically located closer to end users than the host, or origin server. Requests for content go to a CDN server instead of all the way to the hosting server, which may be thousands of miles and across multiple autonomous networks from the user. Using a CDN can result in a massive decrease in page load times.


How to get started on optimizing website performance with the Cloudflare CDN (content delivery network)?

1. Optimize images

Images comprise a large percentage of Internet traffic, and they often take the longest to load on a website since image files tend to be larger in size than HTML and CSS files. Luckily, image load time can be reduced via image optimization. Optimizing images typically involves reducing the resolution, compressing the files, and reducing their dimensions, and many image optimizers and image compressors are available for free online.

2. Minify CSS and JavaScript files

Minifying code means removing anything that a computer doesn't need in order to understand and carry out the code, including code comments, whitespace, and unnecessary semicolons. This makes CSS and JavaScript files slightly smaller so that they load faster in the browser and take up less bandwidth.

3. Reduce the number of HTTP requests if possible

Most webpages require browsers to make multiple HTTP requests for various assets on the page, including images, scripts, and CSS files; in fact, many webpages require dozens of these requests. Each request results in a round trip to and from the server hosting the resource, which can add to the overall load time for a webpage.

4. Use browser HTTP caching

The browser cache is a temporary storage location where browsers save copies of static files so that they can load recently visited webpages much more quickly, instead of needing to request the same content over and over. Developers can instruct browsers to cache elements of a webpage that will not change often. Instructions for browser caching go in the headers of HTTP responses from the hosting server.

5. Minimize the inclusion of external scripts

Any scripted webpage elements that are loaded from somewhere else, such as external commenting systems, CTA buttons, or lead-generation popups, need to be loaded each time a page loads.

6. Don't use redirects, if possible

A redirect is when visitors to one webpage are forwarded to a different page instead. Redirects add fractions of a second, or sometimes even whole seconds, to page load time.
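As a toy illustration of tip 2 above, merely stripping comments and collapsing whitespace already shrinks a stylesheet; real minifiers do considerably more, but the principle is the same:

```shell
# A small CSS file with a comment and indentation
cat > style.css <<'EOF'
/* header colour */
h1 {
    color: blue ;
}
EOF

# Remove /* ... */ comments, then squeeze runs of whitespace to single spaces
sed 's|/\*[^*]*\*/||g' style.css | tr -s ' \n\t' ' ' > style.min.css

wc -c < style.css       # size before
wc -c < style.min.css   # size after - smaller
```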

Event Data getting Stale in Nagios - Resolve it Now

This article covers methods to fix Event Data getting Stale in Nagios. Basically, you will see the causes for event data getting stale in Nagios. There is a known bug relating to event data in versions 2009R1.4B-2011R1.1.

This bug has been patched and the fix will be available in releases later than the versions posted above, but if you're experiencing this error, and/or the Nagios service is taking an excessively long time to start, you may have a corrupted MySQL table that needs repair.


To fix this Nagios error:

1. Stop the following services:

$ service nagios stop
$ service ndo2db stop
$ service mysqld stop

2. Run the repair script for mysql tables:

/usr/local/nagiosxi/scripts/repairmysql.sh nagios

3. Unzip and copy the following dbmaint file to /usr/local/nagiosxi/cron/. This will overwrite the previous version.

$ cd /tmp
$ wget http://assets.nagios.com/downloads/nagiosxi/patches/dbmaint.zip
$ unzip dbmaint.zip
$ chmod +x dbmaint.php
$ cp dbmaint.php /usr/local/nagiosxi/cron

Delete Repository And GPG Key On Ubuntu Systems

This article covers steps to delete the repository and GPG Key On Ubuntu. All packages are signed with a pair of keys consisting of a private key and a public key, by the package maintainer.

A user's private key is kept secret, and the public key may be given to anyone the user wants to communicate with.

Whenever you add a new repository to your system, you must also add a repository key so that the APT Package Manager trusts the newly added repository.

Once you've added the repository keys, you can make sure you get the packages from the correct source.


To remove Repository keys:

You can remove the repository key if it is no longer needed or if the repository has already been removed from the system.

A key can be deleted by passing the full fingerprint in quotes as follows (a hex value of 40 characters):

$ sudo apt-key del "D320 D0C3 0B02 E64C 5B2B B274 3766 2239 8999 3A70"
OK

Alternatively, you can delete a key by entering only the last 8 characters:

$ sudo apt-key del 89993A70
OK
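The 8-character short ID is simply the last 8 hex digits of the full fingerprint with the spaces removed, which can be checked in the shell (using the example fingerprint above):

```shell
# Derive the short key ID from the full 40-character fingerprint
fpr="D320 D0C3 0B02 E64C 5B2B B274 3766 2239 8999 3A70"
short=$(printf '%s' "$fpr" | tr -d ' ' | tail -c 8)   # last 8 hex digits
echo "$short"   # 89993A70
```

Note that on recent Ubuntu releases apt-key is deprecated in favour of keyring files under /etc/apt/trusted.gpg.d/.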

Once you have removed the repository key, run the apt command to refresh the repository index:

$ sudo apt update

You can verify that the above GPG key has been removed by running the following command:

$ sudo apt-key list

Guest unable to reach host using macvtap interface - Fix it Now

This article covers how to fix the issue with guests unable to reach the host using macvtap interface.

This issue occurs when a guest virtual machine can communicate with other guests, but cannot connect to the host machine after being configured to use a macvtap (also known as type='direct') network interface.


To resolve this error (guests unable to reach the host using macvtap interface), simply create an isolated network with libvirt:

1. Add and save the following XML in the /tmp/isolated.xml file. If the 192.168.254.0/24 network is already in use elsewhere on your network, you can choose a different network.

<network>

  <name>isolated</name>

  <ip address='192.168.254.1' netmask='255.255.255.0'>

    <dhcp>

      <range start='192.168.254.2' end='192.168.254.254' />

    </dhcp>

  </ip>

</network>

2. Create the network with this command: virsh net-define /tmp/isolated.xml

3. Set the network to autostart with the virsh net-autostart isolated command.

4. Start the network with the virsh net-start isolated command.

5. Using virsh edit name_of_guest, edit the configuration of each guest that uses macvtap for its network connection and add a new <interface> in the <devices> section similar to the following (note the <model type='virtio'/> line is optional to include):

<interface type='network'>

  <source network='isolated'/>

  <model type='virtio'/>

</interface>

6. Shut down, then restart each of these guests.

Since this new network is isolated to only the host and guests, all other communication from the guests will use the macvtap interface.
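A sketch of step 1 in script form: write the definition (copied from above) to /tmp/isolated.xml, and optionally confirm it is well-formed before handing it to virsh net-define. The python3 check is just a convenience and assumes python3 is installed:

```shell
# Write the isolated-network definition from step 1
cat > /tmp/isolated.xml <<'EOF'
<network>
  <name>isolated</name>
  <ip address='192.168.254.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.254.2' end='192.168.254.254' />
    </dhcp>
  </ip>
</network>
EOF

# Optional sanity check: the file parses and the network name is "isolated"
python3 -c "import xml.etree.ElementTree as ET; \
print(ET.parse('/tmp/isolated.xml').getroot().find('name').text)"   # isolated
```

Then continue with virsh net-define /tmp/isolated.xml as described in step 2.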

Listen on Privileged Ports with Nagios Log Servers - How to set it up

This article covers how to configure Nagios Log Server to listen on privileged ports, for administrators who would like it to listen on ports below 1024, which are privileged in Linux. This can be useful if you have legacy devices that can only send on specific ports (e.g. syslog on port 514).

Ports below 1024 are privileged on Linux and only allow the root user to listen on them. 

This can be implemented via two solutions:

1. Run Logstash as root

2. Use setcap


To use setcap for listening on privileged ports:

This method keeps Logstash running as the nagios user, but it may be less secure in some environments because it allows any Java process to listen on privileged ports.

1. The Logstash init configuration file requires three lines to be added to the end of it; open the file with the following command:

On Debian | Ubuntu:

$ vi /etc/default/logstash

or

$ sudo vi /etc/default/logstash

2. Then, add the following three lines to the end of the file:

echo $(dirname $(find /usr/lib -name libjli.so)) | awk '{print $1}'> /etc/ld.so.conf.d/java.conf

eval "$(which ldconfig)"

setcap 'cap_net_bind_service=+ep' $(readlink -f $(which java))

3. Save the file and close vi.

4. Restart Logstash Service

The logstash service needs to be restarted for these changes to apply:

$ sudo systemctl restart logstash.service

AWS Instance loses network connectivity - Fix this issue now

This article covers method to fix AWS Instance loses network connectivity error.

Basically, AWS Instance loses network connectivity if the instance has the wrong time set.


To fix Amazon EC2 Windows instance network connectivity issue:

You can create a temporary elastic network interface and attach it to the Amazon EC2 Windows instance. Then you can temporarily connect to the instance and fix the issue.

1. Open the Amazon EC2 console, and then choose Instances from the navigation pane.

2. Select your instance. From the Description tab, note the Subnet ID.

3. Create a new network interface in the same subnet as the instance.

Important: Be sure to select a security group that allows incoming Remote Desktop Protocol (RDP) traffic from your IP address.

4. Attach the new network interface to the instance.

Note: The network interface might take a few minutes to come online. If you connect to the instance using RDP, associate an Elastic IP address with the network interface.

5. Using the new network interface, connect to the instance using RDP.

6. Change the network connection settings in Windows to use DHCP. Or, specify the correct private IP address settings. For instructions, see Configuring a secondary private IPv4 address for your Windows instance.

7. Detach the temporary network interface.

Note: If you've associated an Elastic IP address with the network interface, and no longer need the Elastic IP address, release the Elastic IP address.

Enable WSGI module support in VestaCP - Do it Now

This article covers how to enable WSGI module support in VestaCP for our customers. WSGI is the Web Server Gateway Interface, a specification that describes how a web server communicates with web applications, and how web applications can be chained together to process one request. The mod_wsgi Apache module implements the web-server side of the WSGI interface for running Python web applications.


To enable WSGI support on a Debian or Ubuntu on Vesta Control Panel:

1. Install wsgi apache module

$ apt-get install libapache2-mod-wsgi

$ a2enmod wsgi

2. Download wsgi template

$ cd /usr/local/vesta/data/templates/web

$ wget http://c.vestacp.com/0.9.8/ubuntu/wsgi/apache2.tar.gz

$ tar -xzvf apache2.tar.gz

$ rm -f apache2.tar.gz

3. Create a new package, or set wsgi as the Apache template in an existing package

4. Add a new user and assign them a package with the wsgi template

5. Add a new domain and check the result


Importance of WSGI?

1. WSGI gives you flexibility. Application developers can swap out web stack components for others. For example, a developer can switch from Green Unicorn to uWSGI without modifying the application or framework that implements WSGI.

2. WSGI servers promote scaling. Serving thousands of requests for dynamic content at once is the domain of WSGI servers, not frameworks. WSGI servers handle processing requests from the web server and deciding how to communicate those requests to an application framework's process. The segregation of responsibilities is important for efficiently scaling web traffic.


Facts about WSGI:

1. What WSGI stands for (Web Server Gateway Interface)

2. A WSGI container is a separate running process that runs on a different port than your web server

3. Your web server is configured to pass requests to the WSGI container which runs your web application, then pass the response (in the form of HTML) back to the requester.

Configure PostgreSQL on Linux in Vesta control panel - How to do it

This article covers how to install and set up PostgreSQL on the Vesta Control Panel running a RHEL, CentOS, Debian, or Ubuntu server. PostgreSQL is an advanced open-source relational database system that supports SQL features such as foreign keys, subqueries, triggers, and user-defined types and functions.


To set up PostgreSQL on a RHEL or CentOS:

1. Install PostgreSQL packages

yum install postgresql postgresql-server postgresql-contrib phpPgAdmin

* If you have remi installed then don't forget to explicitly enable it.

yum install --enablerepo=remi postgresql postgresql-server postgresql-contrib phpPgAdmin


2. Initialize database cluster

service postgresql initdb


3. Download hba configuration

wget http://c.vestacp.com/0.9.8/rhel/pg_hba.conf -O /var/lib/pgsql/data/pg_hba.conf


4.  Start the server

service postgresql start


5. Set postgres user password

su - postgres

psql -c "ALTER USER postgres WITH PASSWORD 'pgp4sw0rd'"

exit


6. Enable pgsql databases support in vesta.

open /usr/local/vesta/conf/vesta.conf and set DB_SYSTEM to 'mysql,pgsql'


7. Register pg instance in control panel

v-add-database-host pgsql localhost postgres pgp4sw0rd


8. Download phpPgAdmin configuration

wget http://c.vestacp.com/0.9.8/rhel/pga.conf -O /etc/phpPgAdmin/config.inc.php

wget http://c.vestacp.com/0.9.8/rhel/httpd-pga.conf -O /etc/httpd/conf.d/phpPgAdmin.conf


9. Restart web server

service httpd restart


To set up PostgreSQL on a Debian or Ubuntu:

1. Install PostgreSQL packages

apt-get install postgresql postgresql-contrib phppgadmin


2. Download hba configuration

wget http://c.vestacp.com/0.9.8/debian/pg_hba.conf -O /etc/postgresql/*/main/pg_hba.conf


3. Restart the server

service postgresql restart


4. Set postgres user password

su - postgres

psql -c "ALTER USER postgres WITH PASSWORD 'pgp4sw0rd'"

exit


5. Enable pgsql databases support in vesta.

open /usr/local/vesta/conf/vesta.conf and set DB_SYSTEM to 'mysql,pgsql'


6. Register pg instance in control panel

v-add-database-host pgsql localhost postgres pgp4sw0rd


7. Download phpPgAdmin configuration

wget http://c.vestacp.com/0.9.8/debian/pga.conf -O /etc/phppgadmin/config.inc.php

wget http://c.vestacp.com/0.9.8/debian/apache2-pga.conf -O /etc/apache2/conf.d/phppgadmin


8. Restart web server

service apache2 restart

MRTG Reports SNMP_Session Errors in Nagios - Fix it now

This article covers how to fix Nagios issue, MRTG Reports SNMP_Session Errors while using Nagios.

You can see this error when running MRTG at the command line such as:

LANG=C LC_ALL=C /usr/bin/mrtg /etc/mrtg/mrtg.cfg --lock-file /var/lock/mrtg/mrtg_l --confcache-file /var/lib/mrtg/mrtg.ok


When this Nagios error happens, you will receive an error similar to this:

Subroutine SNMP_Session::pack_sockaddr_in6 redefined at /usr/local/share/perl5/Exporter.pm line 66.

at /usr/bin/../lib/mrtg2/SNMP_Session.pm line 149.

Core Configuration Manager Displaying Issues in Nagios XI

This article covers how to resolve the issue with Nagios XI that stops displaying the core configuration manager or the components inside the core configuration manager.

If this is the case, when using Core Configuration Manager (CCM), the interface does not work as expected, does not appear to display correctly, and generally feels buggy. This issue is related to the web browser's implementation of JavaScript. If possible, use a browser that more closely implements the ECMAScript Language Specification.

A quick way to see if this is the problem is to see if you experience the same issue using another web browser.


To fix this Nagios configuration Problem:

1. In the event that the Core Config Manager is not visible or components are missing from the page, this generally relates to a proxy, and the following steps may resolve the issue:

pear config-set http_proxy http://proxy:port

2. Make sure to change proxy:port to match your proxy server, example:

pear config-set http_proxy http://192.168.44.20:8080

3. Then execute the following:

pear install HTML_Template_IT

After performing these steps, go back to CCM and see if it works.

Enable Leech Protection in cPanel - Do it with ease

This article covers the step-by-step process to configure Leech Protection in cPanel. Leech Protect is an easy-to-configure security feature offered within cPanel that allows you to detect unusual levels of activity in password-restricted directories on your website.


Importance of Leech Protection in cPanel:

1. Leeching is when users publicly post their username and password; unauthorized visitors can then use those credentials to access secure areas of your website.

2. With the Leech Protection feature in cPanel, you can limit the number of times a user can access a secure area of your website within a two-hour period. 

3. After you set the maximum number of logins within a two-hour period, the system redirects or suspends users who exceed it. 

4. This is also useful when, say, someone is trying to log in to restricted areas of your website by guessing combinations of usernames and passwords.


To Enable Leech Protection in cPanel:

1. Click Leech Protection under Security in cPanel.

2. Click on the name of the directory that you want to protect. You can click the folder icon next to the folder name to open the folder.

3. Under Set up Leech Protection, enter the number of logins allowed per username in a two-hour period.

4. To redirect users who exceeded the maximum number of logins within a two-hour period, enter a URL to which you wish to redirect them.

5. To receive an email alert when an account is compromised, select the Send Email Alert to option and enter the email address in the text field.

6. To disable compromised accounts, check the Disable Compromised Accounts option.

7. When ready, click Enable.

Add user in VestaCP - How to do it

This article covers how to add a user in VestaCP. Vesta control panel (VestaCP) is an open source hosting control panel, which can be used to manage multiple websites, create and manage email accounts, FTP accounts, and MySQL databases, manage DNS records and so on.


To Add / Edit User in VestaCP:

1. First, click the USER tab on top, then click the green coloured “+” to add a new user.

2. Fill in the details for the new user. Click “Add” when you’ve completed the info.

3. This message will pop up if all the info are filled in correctly.

Now, you will see 2 users to choose from. Access the newly created user by clicking on “Login as (username)”. 

Each user can manage their own web, DNS, mail and database, etc.

You can also perform edit, deletion or suspension of user accounts using the buttons shown in the red box.


To uninstall Vesta Control panel:

1. Stop the vesta service: service vesta stop.

2. Remove vesta packages and the software repository. RHEL/CentOS: yum remove vesta* and rm -f /etc/yum.repos.d/vesta.repo. Debian/Ubuntu: apt-get remove vesta* and rm -f /etc/apt/sources.list.d/vesta.list.

3. Delete the data directory and cron jobs.

Configuration verification failed in Nagios - Fix it Now

This article covers fixes to this Nagios Configuration failed problem.

When you click the Show Errors link a message is shown that indicates the problem in the config files along with a line number for the config file. However when looking at the config file in a text editor, the line number does not appear to relate to the problem.


The Apply Configuration process is as follows:

i. New config files are temporarily written to disk

ii. Nagios verifies the config files are valid

iii. Temporary config files are made permanent

iv. Nagios service is restarted


When the verification step fails, the temporary files are discarded. Hence, when you go to look at the file at the referenced line number, the reference is not valid, as the temporary files no longer exist.


To fix Nagios Configuration Problem:

1. Open CCM

2. Tools > Config File Management

3. Click the Delete Files button

4. It will say "Successfully deleted all Host / Service Config Files"

5. Click the Write Configs Button

6. It will show an output of all the files it creates; in large deployments this step may take a long time.

7. Click the Verify Files button

8. The output should end with the error message you experienced previously.

At this point, you can open an SSH session to your Nagios XI server and open the file in a text editor to investigate the problem.

Add Cron Job in VestaCP - How to do it

This article covers steps to add cron jobs in VestaCP. Using VestaCP, you can add mail accounts, databases, Cron jobs, and a whole lot more with just a few clicks. Cron jobs help automate commands that need to run regularly, which ensures everything runs smoothly.


Vesta Control Panel (VestaCP) is an open source hosting control panel that can manage multiple websites, create and manage email accounts, create and manage FTP accounts. Also, manage MySQL database and DNS records.


How to setup a CRON job using VestaCP ?

1. Move to the “CRON” tab, then mouse over the plus symbol and click on “Add Cron Job”.

2. Enter the command you would like to execute and make sure to include the necessary privileges such as sudo if your command requires it. Use the frequency generator on the right side of the options to set how often you would like the command to execute.

3. Finally, click Generate to confirm the frequency and then click Add to finish adding the Cron job.

Note: Before adding a cron job be sure to test it out first to ensure it works.

Vital Command Line commands for Linux Admins with examples

This article covers a few vital command-line commands for Linux admins. The Linux command line is a text interface to your computer. It allows users to execute commands by typing them manually at the terminal, or to execute them automatically through shell scripts.


Common commands in Linux:

1. su command

The su command exists on most Unix-like systems. It lets you run a command as another user, provided you know that user's password. When run with no user specified, su defaults to the root account. The command to run must be passed using the -c option.


2. which command

The which command locates the executable file associated with a given command by searching the directories listed in the PATH environment variable. It has three return statuses: 0 if all specified commands are found and executable, 1 if one or more commands are missing or not executable, and 2 if an invalid option was specified.


3. whoami command

The whoami command is used in both Unix-like operating systems and Windows. It is essentially the concatenation of the strings “who”, “am”, “i”. It displays the username of the current user when invoked, and is equivalent to running the id command with the options -un.


4. w command

w is a command-line utility that displays information about currently logged in users and what each user is doing. It also gives information about how long the system has been running, the current time, and the system load average.
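As a quick illustration, the commands above can be tried directly in any shell (output will vary by system):

```shell
# Locate the executable behind a command; a 0 exit status means it was found
which ls
echo "which exit status: $?"

# Print the current user's name; whoami is equivalent to `id -un`
whoami
id -un
```

su -c 'command' and w can be tried interactively in the same way; note that su will prompt for the target user's password.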



Facts about the demand for Linux admins:

1. The job prospects for Linux System Administrator are favorable. 

2. According to the US Bureau of Labor Statistics (BLS), there is expected to be a growth of 6 percent from 2016 to 2026. 

3. Candidates who have a firm hold on cloud computing and other recent technologies have bright prospects.

Install NDOUtils in Ubuntu - Do it now

This article covers how to install NDOUtils in Ubuntu. NDOUtils is basically the Database Output for Nagios Core. 

NDOUtils stands for Nagios Data Output Utilities which is an addon that allows you to move status and event information from Nagios to a MySQL Database for later retrieval and processing.


NDOUtils consists of the following parts:

1. The NDOMOD event broker module. This module is intended to be loaded by the Nagios process at runtime. Its only role is to dump all events and data from Nagios to a TCP socket, a Unix domain socket, or a regular file on the local filesystem. If you want real-time transfer of data to MySQL, dump the data to a TCP or Unix domain socket. If you want delayed transfer of data into MySQL (i.e. you need to transfer the data to another host first), dump the data to a regular file.


2. The NDO2DB daemon. This standalone daemon reads input (that was produced by the NDOMOD broker module) from a TCP or Unix domain socket, parses that data, and then dumps it into one or more MySQL databases. The daemon is capable of handling multiple client connections simultaneously, so you can have multiple instances of the NDOMOD module writing to the same TCP or Unix domain socket at the same time.


3. The FILE2SOCK utility. This simple utility reads data from a standard file and dumps it to either a TCP or a Unix domain socket. This is useful if you are having the NDOMOD module write to a standard file that you later want to send to the NDO2DB daemon. If the module and the daemon are running on different machines, you can periodically use SSH to transfer the file from the monitoring machine to the machine running the NDO2DB daemon, and then use the FILE2SOCK utility to send the contents of that file to the TCP socket or Unix domain socket that the NDO2DB daemon is reading.


4. The LOG2NDO utility. This utility is used for importing historical log archives from NetSaint and Nagios and sending them to the NDO2DB daemon. It takes a single log file as its input and can output data to either a TCP socket, a Unix domain socket or standard output.


To compile the NDO broker module, NDO2DB daemon, and additional utilities:

1. Run the commands below;

./configure

make all

2. If the configure script is unable to locate your MySQL development libraries, you may need to help it out by using the --with-mysql-lib option. 

Here's an example:

./configure --with-mysql-lib=/usr/lib/mysql


NDOUTILS Tuning Kernel Parameters includes:

NDOUTILS uses a single message queue to communicate between the broker module and the NDO2DB daemon. Depending on the operating system, there may be parameters that need to be tuned in order for this communication to work correctly.

1. kernel.msgmax is the maximum size (in bytes) of a single message in a message queue

2. kernel.msgmni is the maximum number of message queue identifiers (i.e. the number of queues) allowed system-wide

3. kernel.msgmnb is the maximum total number of bytes allowed in all messages in any one message queue
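These parameters can be set persistently in /etc/sysctl.conf and applied with sysctl -p. The values below are illustrative assumptions; tune them for your workload:

```
# /etc/sysctl.conf - illustrative System V message queue limits for NDOUtils
kernel.msgmax = 65536
kernel.msgmnb = 65536
kernel.msgmni = 32768
```

Current values can be inspected at any time with sysctl kernel.msgmax kernel.msgmnb kernel.msgmni.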


How to initialize the Database for NDOUtils installation:

Before you start using NDOUtils, you should create the database where you will be storing all Nagios related information.

Note: Only MySQL Databases are supported!

i. Create a database for storing the data (e.g. nagios)

ii. Create a username/password that has at least the following privileges for the database:

SELECT, INSERT, UPDATE, DELETE

iii. Run the DB installation script in the db/ subdirectory of the NDO distribution to create the necessary tables in the database.

cd db

./installdb

iv. Make sure the database name, prefix, and the username/password you just created match the variables specified in the NDO2DB config file.
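For steps i and ii, the statements run in the mysql client typically look like the following. The database name, username, and password here are placeholder assumptions; substitute your own:

```
CREATE DATABASE nagios;
CREATE USER 'ndoutils'@'localhost' IDENTIFIED BY 'choose_a_password';
GRANT SELECT, INSERT, UPDATE, DELETE ON nagios.* TO 'ndoutils'@'localhost';
FLUSH PRIVILEGES;
```

These same database name and credentials must then be placed in the NDO2DB config file as described in step iv.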

Domain Password Policy in the Active Directory - How to Set it up

This article covers an effective method to configure Domain Password Policy in the Active Directory which ensures a high level of security for user accounts. 

A group policy containing the password policy should be assigned at the domain level, not at an OU. You can have multiple GPOs with password policies at the domain level, but only one policy (determined by priority) will be applied to all users.


Basic Password Policy Settings on Windows:

Let's consider all available Windows password settings. 

There are six password settings in GPO:

1. Enforce password history – determines the number of old passwords stored in AD, thus preventing a user from reusing an old password.

However, a domain admin, or a user who has been delegated password reset permissions in AD, can manually set an old password for the account;


2. Maximum password age – sets the password expiration in days. After the password expires, Windows will ask the user to change the password. Ensures the regularity of password changes by users;

You can find out when a specific user’s password expires using the PowerShell: 

Get-ADUser -Identity j.werder -Properties msDS-UserPasswordExpiryTimeComputed | Select-Object @{Name="ExpirationDate";Expression={[datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed")}}


3. Minimum password length – it is recommended that passwords should contain at least 8 symbols (if you specify 0 here, the password is not required);


4. Minimum password age – sets how often users can change their passwords. This setting prevents a user from changing the password repeatedly in quick succession in order to cycle a favorite old password out of the password history. As a rule, it is worth setting this to 1 day so that users can change a password themselves if it gets compromised (otherwise an administrator will have to change it);


5. Password must meet complexity requirements – if the policy is enabled, a user cannot use the account name in the password (no more than two consecutive characters of the username or first name), and the password must combine at least three of the following character types: digits (0–9), uppercase letters, lowercase letters, and special characters ($, #, %, etc.). Also, to prevent the use of weak passwords (from a password dictionary), it is recommended to regularly audit user passwords in the AD domain;


6. Store passwords using reversible encryption – user passwords are stored encrypted in the AD database, but in some cases you have to grant certain apps access to user passwords. If this policy setting is enabled, passwords are less protected (almost plain text). It is not secure: an attacker can get access to the password database if the DC is compromised (a read-only domain controller, RODC, can be used as one of the protection measures).

Features Of SQL Server 2019 - More Insight

This article covers the main features of SQL Server 2019, including data virtualization and SQL Server 2019 Big Data Clusters.

You can read, write, and process big data from Transact-SQL or Spark, easily combine and analyze high-value relational data with high-volume big data, query external data sources, and store big data in HDFS managed by SQL Server.


The Main Features of SQL Server 2019 include:

1. Intelligent Query Processing Enhancements.

2. Accelerated Database Recovery (ADR).

3. AlwaysEncrypted with secure enclaves.

4. Memory-optimized Tempdb metadata.

5. Query Store custom capture policies.

6. Verbose truncation warnings.

7. Resumable index build.

8. Data virtualization with Polybase.


How do I start SQL Server?

In SQL Server Configuration Manager, in the left pane, click SQL Server Services. In the results pane, right-click SQL Server (MSSQLServer) or a named instance, and then click Start, Stop, Pause, Resume, or Restart.


To uninstall SQL Server from Windows 10, Windows Server 2016, Windows Server 2019, and greater, follow these steps:

1. To begin the removal process navigate to Settings from the Start menu and then choose Apps.

2. Search for sql in the search box.

3. Select Microsoft SQL Server (Version) (Bit).

4. Select Uninstall.

Remote session disconnected because there are no remote desktop license servers

This article covers how to resolve the error 'remote session disconnected because there are no remote desktop license servers'.


If the problems tend to be associated with the following user messages:

i. The remote session was disconnected because there are no Remote Desktop client access licenses available for this computer.

ii. The remote session was disconnected because there are no Remote Desktop License Servers available to provide a license.


Then, configure the RD Licensing service by following the steps below:

1. Open Server Manager and navigate to Remote Desktop Services.

2. On Deployment Overview, select Tasks, and then select Edit Deployment Properties.

3. Select RD Licensing, then select the appropriate licensing mode for your deployment (Per Device or Per User).

4. Enter the fully qualified domain name (FQDN) of your RD License server, and then select Add.

5. If you have more than one RD License server, repeat step 4 for each server.


If the RD License Diagnoser lists other problems, such as "The RDP protocol component X.224 detected an error in the protocol stream and has disconnected the client," there may be a problem that affects the license certificates. Such problems tend to be associated with user messages, such as the following:

Because of a security error, the client could not connect to the Terminal server. After making sure that you are signed in to the network, try connecting to the server again.

In this case, refresh the X509 Certificate registry keys by following the steps below: back up and then remove the X509 Certificate registry keys, restart the computer, and then reactivate the RD Licensing server.

1. Open the Registry Editor and navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\RCM.

2. On the Registry menu, select Export Registry File.

3. Enter exported-Certificate into the File name box, then select Save.

4. Right-click each of the following values, select Delete, and then select Yes to verify the deletion:

i. Certificate

ii. X509 Certificate

iii. X509 Certificate ID

iv. X509 Certificate2

5. Exit the Registry Editor and restart the RDSH server.


To fix Remote session was disconnected because there are no Remote Desktop client access licenses:

1. You need to delete the following registry key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSLicensing.

2. If it doesn't work and you get the following error message: "The remote computer disconnected the session because of an error in the licensing protocol";

3. Then all you need to do is Right-Click on the Remote Desktop Connection icon and select "Run as Administrator".

Zabbix error Invalid entry when restarting zabbix-agent - Fix it now

This article covers how to fix the Zabbix "Invalid entry" error when restarting zabbix-agent, which may appear when a configuration file is placed in the wrong path.

The problem is with the configurations:

You are putting the .my.cnf file at the wrong place.

The Zabbix agent configuration file has a prescribed format and defined parameters.

If you use the Include option to load additional config files, be sure they follow the same format.

So, in this case, the .my.cnf file fails to load when you put it under /etc/zabbix/zabbix_agentd.d.

You are also missing the part where the userparameter_mysql.conf file is configured properly.


To resolve this Zabbix error:

1. Move the .my.cnf file from /etc/zabbix/zabbix_agentd.d directory to /etc/zabbix. 

Also remove any Include entry that refers to the .my.cnf file (if there is one). 

The content of the file may look like this:

[mysqld]

user=username

password=userpass


[mysqladmin]

user=username

password=userpass

2. Make sure that the user listed here exists and has the necessary permissions in MySQL.

3. Edit /etc/zabbix/zabbix_agentd.d/userparameter_mysql.conf file: You need to replace HOME=/var/lib/zabbix with HOME=/etc/zabbix to point to the right file (should appear three times) as mentioned in the first line of the file.

4. Finally restart the agent: $ service zabbix-agent restart
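For reference, after the edit in step 3, the lines in userparameter_mysql.conf that carry the HOME variable look roughly like this (the exact keys and commands vary between Zabbix versions, so verify against your own file; the mysql.ping entry below is a typical example):

```
UserParameter=mysql.ping,HOME=/etc/zabbix mysqladmin ping | grep -c alive
```

The HOME variable tells mysqladmin where to find the .my.cnf credentials file, which is why it must point at /etc/zabbix after the move in step 1.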

Zabbix tries to connect to the wrong database - Fix it now

This article covers how to fix Zabbix error, when Zabbix tries to connect to the wrong database.

Basically, when Zabbix tries to connect to the wrong database, we can simply resolve it by restarting the service.

Zabbix is an open-source monitoring software tool for diverse IT components, including networks, servers, virtual machines (VMs) and cloud services. Zabbix provides monitoring metrics, among others network utilization, CPU load and disk space consumption.


How do I fix Zabbix server is not running?

Zabbix server error due to problems with the firewall.

1. As a root user, we check and confirm whether the firewall is allowing connection to Zabbix Server port which is 10051. 

2. If not, then we add the following rule in the configuration file /etc/sysconfig/iptables. 

3. Finally, restart the service in order to fix the error.
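A typical rule for this in /etc/sysconfig/iptables looks like the line below. This is a sketch only; your ruleset may require different placement or source restrictions:

```
-A INPUT -p tcp --dport 10051 -j ACCEPT
```

After adding the rule, restart the iptables service so it takes effect.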


How to turn off Zabbix server?

To Stop the Service:

1. Use PuTTY to log in as user robomanager to the server where Zabbix is installed.

2. Run the following command to switch to user root: su - root.

3. Run the following command to stop the Zabbix service: systemctl stop zabbix_server.service.


To monitor my zabbix port:

1. In order to test your configuration, access the Monitoring menu and click on the Latest data option. 

2. Use the filter configuration to select the desired hostname. 

3. Click on the Apply button. 

4. You should be able to see the results of your TCP port monitoring using Zabbix.


Where is zabbix config located?

The Zabbix installation process created an Apache configuration file that contains these settings. 

It is located in the directory /etc/zabbix and is loaded automatically by Apache.


How does Zabbix proxy work?

Zabbix proxy is a process that may collect monitoring data from one or more monitored devices and send the information to the Zabbix server, essentially working on behalf of the server. 

All collected data is buffered locally and then transferred to the Zabbix server the proxy belongs to.


Zabbix Server supported DATABASE ENGINE:

Zabbix Server and Proxy support five database engines:

1. IBM DB2

2. MySQL

3. Oracle

4. PostgreSQL

5. SQLite

Libvirt error Unable to resolve address name or service not known

This article covers tips to fix 'Libvirt error: Unable to resolve address: name or service not known'. 

QEMU guest migration fails and this error message appears:

# virsh migrate qemu qemu+tcp://192.168.122.12/system
error: Unable to resolve address name_of_host service '49155': Name or service not known

Note that the address used for migration data cannot be automatically determined from the address used for connecting to destination libvirtd (for example, from qemu+tcp://192.168.122.12/system). 

This is because to communicate with the destination libvirtd, the source libvirtd may need to use network infrastructure different from that which virsh (possibly running on a separate machine) requires.


To fix Libvirt error: Unable to resolve address: name or service not known:

The best solution is to configure DNS correctly so that all hosts involved in migration are able to resolve all host names.

If DNS cannot be configured to do this, a list of every host used for migration can be added manually to the /etc/hosts file on each of the hosts. 

However, it is difficult to keep such lists consistent in a dynamic environment.

i. If the host names cannot be made resolvable by any means, virsh migrate supports specifying the migration host:

# virsh migrate qemu qemu+tcp://192.168.122.12/system tcp://192.168.122.12

Destination libvirtd will take the tcp://192.168.122.12 URI and append an automatically generated port number. 

ii. If this is not desirable (because of firewall configuration, for example), the port number can be specified in this command:

# virsh migrate qemu qemu+tcp://192.168.122.12/system tcp://192.168.122.12:12345

iii. Another option is to use tunnelled migration. Tunnelled migration does not create a separate connection for migration data, but instead tunnels the data through the connection used for communication with destination libvirtd (for example, qemu+tcp://192.168.122.12/system):

# virsh migrate qemu qemu+tcp://192.168.122.12/system --p2p --tunnelled

Boot a guest using PXE - Do it now

This article covers how to boot a guest using PXE. PXE booting is supported for Guest Operating Systems that are listed in the VMware Guest Operating System Compatibility list and whose operating system vendor supports PXE booting of the operating system.

The virtual machine must meet the following requirements:

1. Have a virtual disk without operating system software and with enough free disk space to store the intended system software.

2. Have a network adapter connected to the network where the PXE server resides.


A virtual machine is not complete until you install the guest operating system and VMware Tools. Installing a guest operating system in your virtual machine is essentially the same as installing it in a physical computer.


To use PXE with Virtual Machines:

You can start a virtual machine from a network device and remotely install a guest operating system using a Preboot Execution Environment (PXE). 

You do not need the operating system installation media. When you turn on the virtual machine, the virtual machine detects the PXE server.


To Install a Guest Operating System from Media:

You can install a guest operating system from a CD-ROM or from an ISO image. Installing from an ISO image is typically faster and more convenient than a CD-ROM installation. 


To Upload ISO Image Installation Media for a Guest Operating System:

You can upload an ISO image file to a datastore from your local computer. You can do this when a virtual machine, host, or cluster does not have access to a datastore or to a shared datastore that has the guest operating system installation media that you require.


How to Use a private libvirt network ?

1. Boot a guest virtual machine using libvirt with PXE booting enabled. You can use the virt-install command to create/install a new virtual machine using PXE:

virt-install --pxe --network network=default --prompt

2. Alternatively, ensure that the guest network is configured to use your private libvirt network, and that the XML guest configuration file has a <boot dev='network'/> element inside the <os> element, as shown in the following example:

<os>

   <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>

   <boot dev='network'/>

   <boot dev='hd'/>

</os>

3. Also ensure that the guest virtual machine is connected to the private network:

<interface type='network'>

   <mac address='52:54:00:66:79:14'/>

   <source network='default'/>

   <target dev='vnet0'/>

   <alias name='net0'/>

   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>

</interface>

Apply Configuration Fails in Nagios Server - Resolve this issue now

This article covers solutions to when Apply Configuration Fails in Nagios Server. This issue happens when the Nagios XI server is unable to resolve the "localhost".

The error message will look like this:

Apply Configuration fails with the following error:

Backend login to the Core Config Manager failed.
An error occurred while attempting to apply your configuration to Nagios Core.
Monitoring engine configuration files have been rolled back to their last known good checkpoint.


To resolve this Nagios Problem:

1. Edit your /etc/hosts file and make sure there are localhost entries. For example:

127.0.0.1    localhost    localhost.localdomain    localhost4    localhost4.localdomain4    xi-c6x-x64

::1    localhost    localhost.localdomain    localhost6    localhost6.localdomain6    xi-c6x-x64

2. After making these changes try and "Apply Configuration" from Core Configuration Manager and your problem should be resolved.

Apply Configuration never completes in Nagios - Fix this issue now

This article covers methods to resolve the issue where Apply Configuration never completes in Nagios. The backend components in Nagios XI require high-level privileges, which are accommodated for by sudoers entries.

This allows for high level commands to be executed by scripts without requiring user input. If these entries are missing then they can cause unexpected results.


Sometimes, when creating a large number of objects, the apply configuration process takes longer than expected and PHP may time out or exceed resource limits.

These limits are defined in the php.ini file. The location of the php.ini file differs depending on your operating system / version. The following command will determine the location:

find /etc -name php.ini

If there are multiple results then the one in the apache directory is the one that needs changing.

Edit /etc/php.ini and increase these values:

max_execution_time = 60

max_input_time = 60

memory_limit = 256M

 

After making these changes you'll need to restart the Apache service using one of the commands below:

RHEL 7 | CentOS 7 | Oracle Linux 7

$ systemctl restart httpd.service

Debian | Ubuntu 16/18

$ systemctl restart apache2.service

Port 113 IDENT Requests - How to Disable it on Nagios

This article covers how to disable Port 113 IDENT Requests on Nagios. 

You may see port 113 return requests either from your Nagios XI server to the originating host (when submitting NSCA passive results), or when checking NRPE services.

You will see this behavior on your firewall logs as you will most likely not have a firewall rule for port 113.

This is usually because you are running an NRPE check through XINETD with USERID included in the log_on_success or log_on_failure options in the remote host's /etc/xinetd.d/nrpe file.

OR this could be because you are submitting passive results to the XI server through NSCA (which is running under XINETD) /etc/xinetd.d/nsca with the same options as above.


To disable Port 113 IDENT Requests:

1. Remove the USERID option from the log_on_failure AND log_on_success lines to stop the IDENT requests. The file you need to change depends on the service:

i. NRPE on remote host

/etc/xinetd.d/nrpe

ii. NSCA on Nagios XI server

/etc/xinetd.d/nsca

2. After making the changes you need to restart the xinetd service using one of the commands below:

RHEL 7+ | CentOS 7+ | Oracle Linux 7+ | Debian | Ubuntu 16/18/20

$ systemctl restart xinetd.service


What is filter ident port 113?

Filter IDENT (port 113), when enabled, blocks IDENT queries; IDENT allows hosts to query the device and thus discover information about the host.

On the VPN Passthrough screen, you can configure the router to transparently pass IPSec, PPPoE, and PPTP traffic from internal hosts to external resources.

Configure software RAID on Linux using MDADM - Do it now

This article covers how to Configure software RAID on Linux using MDADM.


To Install a Software Raid Management Tool:

To install mdadm, run the installation command:

1. For CentOS/Red Hat (yum/dnf is used): $ yum install mdadm

2. For Ubuntu/Debian: $ apt-get install mdadm

3. SUSE: $ sudo zypper install mdadm

4. Arch Linux: $ sudo pacman -S mdadm
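Once mdadm is installed, creating an array follows this general pattern. This is a sketch only: /dev/vdb, /dev/vdc, the RAID level, and the /backup mount point are placeholder assumptions, and these commands are destructive and require root:

```
# Create a RAID 1 mirror from two empty placeholder disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/vdb /dev/vdc

# Put a filesystem on the new array and mount it
mkfs.ext4 /dev/md0
mount /dev/md0 /backup
```

The state of the new array can then be checked with cat /proc/mdstat, as shown later in this article.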


Terms related to Integrity of a RAID Array:

1. Version – the metadata version

2. Creation Time – the date and time of RAID creation

3. Raid Level – the level of a RAID array

4. Array Size – the size of the RAID disk space

5. Used Dev Size – the space size used by devices

6. Raid Device – the number of disks in the RAID

7. Total Devices – the number of disks added to the RAID

8. State – the current state (clean means it is OK)

9. Active Devices – number of active disks in the RAID

10. Working Devices – number of working disks in the RAID

11. Failed Devices – number of failed devices in the RAID

12. Spare Devices – number of spare disks in the RAID

13. Consistency Policy – the parameter that sets the synchronization type after a failure; resync means a full synchronization after RAID array recovery (bitmap, journal, and ppl modes are available)

14. UUID – raid array identifier


To Recovering from a Disk Failure in RAID, Disk Replacement:

If one of the disks in a RAID array fails or is damaged, you can replace it with another one. First of all, find out whether the disk is damaged and needs to be replaced.

# cat /proc/mdstat


To Add or Remove Disks to Software RAID on Linux:

1. If you need to remove the previously created mdadm RAID device, unmount it:

# umount /backup

2. Then run this command:

# mdadm -S /dev/md0

mdadm: stopped /dev/md0

3. After destroying the RAID array, it won’t be detected as a separate disk device:

# mdadm -S /dev/md0

mdadm: error opening /dev/md0: No such file or directory

4. You can scan all connected drives and re-create a previously removed (failed) RAID device according to the metadata on physical drives. Run the following command:

# mdadm --assemble --scan


About Mdmonitor: RAID State Monitoring & Email Notifications:

The mdmonitor daemon can be used to monitor the status of the RAID. 

1. First, you must create the /etc/mdadm.conf file containing the current array configuration:

# mdadm --detail --scan > /etc/mdadm.conf

The mdadm.conf file is not created automatically. You must create and update it manually.

2. Add to the end of /etc/mdadm.conf the administrator email address to which you want to send notifications in case of any RAID problems:

MAILADDR raidadmin@woshub.com

3. Then restart mdmonitor service using systemctl:

# systemctl restart mdmonitor

Then the system will notify you by e-mail if there are any mdadm errors or faulty disks.

Prometheus Distributed Monitoring System - A brief review

This article covers an overview of what Prometheus Distributed Monitoring System is and how it works.

Prometheus is an open-source systems monitoring and alerting toolkit with an active ecosystem.


Why is Prometheus used?

Prometheus is an open-source monitoring software that is very popular in the industry. Prometheus is easy to customize, and produces metrics without impacting application performance.

Along with this, Prometheus monitoring can be used to provide clarity into systems and how to run them.


What is Prometheus monitoring used for?

Prometheus is a free software application used for event monitoring and alerting. It records real-time metrics in a time series database (allowing for high dimensionality) built using an HTTP pull model, with flexible queries and real-time alerting.


What is AWS Prometheus?

Amazon Managed Service for Prometheus (AMP) is a Prometheus-compatible monitoring service that makes it easy to monitor containerized applications at scale.

AMP automatically scales as your workloads grow or shrink, and is integrated with AWS security services to enable fast and secure access to data.


What metrics does Prometheus collect?

At this moment, for Prometheus, all metrics are time-series data. The Prometheus client libraries are in charge of aggregating metrics data, like counts or sums. Usually, these client libraries, such as the Go library, have four types of metrics: counter, gauge, histogram, and summary.


What is the difference between Grafana and Prometheus?

Grafana and Prometheus, both help us in tackling issues related to complex data in a simplified manner. 

Grafana is an open-source visualization software, which helps the users to understand the complex data with the help of data metrics.

Prometheus is an open-source event monitoring and alerting tool.


How does Prometheus monitoring work?

Prometheus scrapes metrics from instrumented jobs, either directly or via an intermediary push gateway for short-lived jobs. 

It stores all scraped samples locally and runs rules over this data to either aggregate and record new time series from existing data or generate alerts.

Linux vs Windows file systems - Which do you prefer

This article covers difference between Linux and Windows file system. Basically, both Windows and Linux use file systems to store data in an organized manner. 


Advantages of using Linux:

1. Linux facilitates with powerful support for networking. 

2. The client-server systems can be easily set to a Linux system. 

3. It provides various command-line tools such as ssh, ip, mail, telnet, and more for connectivity with the other systems and servers. 

4. Tasks such as network backup are much faster than others.


Disadvantages of Linux OS:

1. No single way of packaging software.

2. No standard desktop environment.

3. Poor support for games.

4. Desktop software is still rare.


Why Linux is not popular as Windows?

The main reason why Linux is not popular on the desktop is that it doesn't have “the one” OS for the desktop as does Microsoft with its Windows and Apple with its macOS. 

If Linux had only one operating system, then the scenario would be totally different today. Linux kernel has some 27.8 million lines of code.


Linux is a good operating system, widely considered one of the most reliable, stable, and secure operating systems available. In fact, many software developers choose Linux as their preferred OS for their projects. 

It is important, however, to point out that the term "Linux" only really applies to the core kernel of the OS.


Most Stable Linux Distros:

1. Debian. Suitable for: Beginners.

2. Fedora. Suitable for: Software Developers, Students.

3. Linux Mint. Suitable for: Professionals, Developers, Students.

4. Manjaro. Suitable for: Beginners.

5. openSUSE. Suitable for: Beginners and advanced users.

6. Tails. Suitable for: Security and privacy.

7. Ubuntu.

8. Zorin OS.


Reasons Why Linux Is Better Than Windows:

1. Total cost of ownership. The most obvious advantage is that Linux is free whereas Windows is not.

2. Beginner friendly and easy to use. Windows OS is one of the simplest desktop OS available today.

3. Reliability. Linux is more reliable when compared to Windows.

4. Hardware.

5. Software.

6. Security.

7. Freedom.

8. Annoying crashes and reboots.


Can Linux and Windows share files?

The easiest and most reliable way to share files between a Linux and Windows computer on the same local area network is to use the Samba file sharing protocol. 

All modern versions of Windows come with Samba installed, and Samba is installed by default on most distributions of Linux.


Can Linux read NTFS drives?

Linux can read NTFS drives using the legacy NTFS driver that ships with the kernel, assuming whoever compiled the kernel didn't choose to disable it. 

To add write access, it's more reliable to use the FUSE ntfs-3g driver, which is included in most distributions.


For typical everyday Linux use, there's absolutely nothing tricky or technical you need to learn. Running a Linux server, of course, is another matter just as running a Windows server is. 

But for typical use on the desktop, if you've already learned one operating system, Linux should not be difficult.


Is Linux a good career choice?

A Linux Administrator job can definitely be something you can start your career with. 

It is basically the first step to start working in the Linux industry. 

Literally every company nowadays works on Linux. So yes, you are good to go.

Set custom php.ini in FastCGI - How to set it up

This article covers how to set a custom php.ini in FastCGI. Basically, FastCGI is one of the best handlers for managing resources for a high-traffic site on shared servers.


To create Custom php.ini with PHP5 under FastCGI:

1. First open .htaccess for the account in question and add the following lines to the bottom of the file:

AddHandler php5-fastcgi .php

Action php5-fastcgi /cgi-bin/php5.fcgi

2. Next, you'll need to source your main server php.ini which is located in /usr/local/lib/. Also note that it needs to have the correct ownership so we'll take care of that too:

cd /home/user/public_html/cgi-bin/

cp -a /usr/local/lib/php.ini .

chown user:user php.ini

3. Next we need to create the wrapper script. Create a file in your current directory (cgi-bin) called php5.fcgi as defined above and add the following:

#!/bin/sh

export PHP_FCGI_CHILDREN=1

export PHP_FCGI_MAX_REQUESTS=10

exec /usr/local/cpanel/cgi-sys/php5

4. Finally, make sure the ownership and permissions are correct on this file:

chown user:user php5.fcgi && chmod 0755 php5.fcgi
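Steps 2 to 4 above can be collected into one small script. This is a sketch: the account name "user" and the cPanel paths /usr/local/lib/php.ini and /usr/local/cpanel/cgi-sys/php5 are taken from the article and should be adjusted for your server.

```shell
#!/bin/sh
# setup_fcgi ACCOUNT DOCROOT -- sketch of steps 2-4 above for one cPanel account
setup_fcgi() {
    account=$1
    docroot=$2
    mkdir -p "$docroot/cgi-bin" && cd "$docroot/cgi-bin" || return 1

    # Step 2: source the main server php.ini (if present) and fix its ownership
    if [ -f /usr/local/lib/php.ini ]; then
        cp -a /usr/local/lib/php.ini .
        chown "$account:$account" php.ini
    fi

    # Step 3: the wrapper script named in the .htaccess Action line
    cat > php5.fcgi <<'EOF'
#!/bin/sh
export PHP_FCGI_CHILDREN=1
export PHP_FCGI_MAX_REQUESTS=10
exec /usr/local/cpanel/cgi-sys/php5
EOF

    # Step 4: correct ownership and permissions (chown needs root on a real server)
    chown "$account:$account" php5.fcgi 2>/dev/null || true
    chmod 0755 php5.fcgi
}

# On the server, as root:
# setup_fcgi user /home/user/public_html
```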

Troubleshoot server down issues - How to do it

This article covers how to troubleshoot server down issues. Basically, network issues might cause the datacenter to go down, which can lead to some unlucky situations. Troubleshooting server down issues is never an easy task. Whether you have a small home network or a super connection of thousands of computers, there are meticulous steps you need to take to get your server back up and running.


Steps to take when troubleshooting server down issues:

1. ANALYZE YOUR NETWORK INFRASTRUCTURE

You will have a better chance at troubleshooting network problems if you first figure out where everything is connected in the infrastructure.

2. STUDY YOUR NETWORK

If you don't have an infrastructure design to go by, you will have to learn your network’s layout when analyzing your connectivity. Several tools can help you to map out the entire network infrastructure. Tools such as IPCONFIG can aid in finding the problem.

3. CONNECTION IS DOWN

From the network troubleshooting application, find out from the OSI model if all the seven layers are working correctly. Usually, if the first layer doesn’t work the whole connection will be down. Check whether the network cable is plugged in.

4. NO IP ADDRESS

Your server could be down just because of unknown IP address settings. An IP address such as 0.0.0.0, or an automatic one that starts with 169.254, will typically result in server down problems. You will need to obtain a valid IP address before you can get your server back up. 

5. NO DNS SERVERS

Without DNS servers configured on your network, all communication will only be possible through IP addresses. A server down issue, in this case, might be a broken line between the router and the internet. 

6. NO DEFAULT GATEWAY

Your servers could be down because there is no default gateway IP address. This breaks the communication between the subnet and the local area network. You will still be able to work as usual on your local servers. 

7. MISCONFIGURED IP SUBNET MASK

A misconfigured subnet mask IP can impede server communication. You can manually configure this IP subnet mask or work with the DHCP server to identify the source if there is a misconfiguration.

Create custom php ini in Litespeed Webserver - How to do it

This article covers how to create a custom php.ini in the Litespeed Webserver. Basically, compared to the Apache web server, the Litespeed web server configuration may feel a bit complicated. In a hosting environment with cPanel servers, it is necessary to edit the PHP variables for each domain or customer, and this can be done by creating a custom php.ini in each user's home directory. The clients can then change the PHP values according to their requirements.  

Some steps need to be done in the Litespeed admin panel on cPanel/WHM to enable a custom php.ini; you can follow the steps below to enable it.


To Create Custom Php.Ini In A Litespeed Webserver:

1. Login into WHM.

2. Select Litespeed Web Server

3. Litespeed Configuration > Admin Console > Configuration > Server > External App > lsphp5

4. Under Environment section >> add “PHPRC=$VH_ROOT”

5. Under “suEXEC User ” section >> add the account username for which custom php.ini has to be enabled.

6. Under “suEXEC Group ” section >> add the group name of the same account.

7. Click save and return to Main >> Litespeed Web server

8. Under Quick Configuration of PHP suEXEC settings,>> Set Enable PHP suExec to yes.

9. After that put custom php.ini in the user’s home directory and check it using a phpinfo page.
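For step 9, a quick way to verify that the custom php.ini is being picked up is a throwaway phpinfo page; the document-root path below is hypothetical:

```shell
# create_phpinfo DOCROOT -- drop a temporary phpinfo page into a docroot
create_phpinfo() {
    printf '<?php phpinfo();\n' > "$1/phpinfo.php"
}

# Example (path is hypothetical):
# create_phpinfo /home/username/public_html
# Then browse to the site's /phpinfo.php and check that the
# "Loaded Configuration File" row points at the user's home directory.
# Remove the page afterwards: rm /home/username/public_html/phpinfo.php
```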

Install NDOUtils in CentOS RHEL and fix related error

This article covers how to install NDOUtils in CentOS. NDOUtils is an addon for Nagios Core that allows you to export current and historical data from one or more Nagios Core instances to a MySQL database. NDOUtils is included with Nagios XI. A source in Nagios Network Analyzer is the data collector. Outside of Nagios Network Analyzer a source is the location where data is originating from.


NDOUtils uses the kernel message queue for transferring the data from Nagios to NDOUtils. We are going to increase the default values the Kernel boots with to ensure it operates optimally.
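The message queue limits live under /proc/sys/kernel. The commands below inspect the current values; the raised values shown in the comment are illustrative examples, not mandatory settings.

```shell
# Inspect the kernel message queue limits NDOUtils depends on
cat /proc/sys/kernel/msgmnb   # maximum bytes in a single queue
cat /proc/sys/kernel/msgmax   # maximum size of a single message
cat /proc/sys/kernel/msgmni   # maximum number of queue identifiers

# To raise them persistently (illustrative values; run as root, then `sysctl -p`):
# printf 'kernel.msgmnb = 131072000\nkernel.msgmax = 131072000\n' >> /etc/sysctl.conf
```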

1. Downloading NDOUtils Source

cd /tmp

wget -O ndoutils.tar.gz https://github.com/NagiosEnterprises/ndoutils/releases/download/ndoutils-2.1.3/ndoutils-2.1.3.tar.gz

tar xzf ndoutils.tar.gz

2. Compile NDOUtils

cd /tmp/ndoutils-2.1.3/

./configure

make all

3. Install Binaries

This step installs the binary files.

make install

4. Initialize Database

This prepares the database for NDOUtils.

cd db/

./installdb -u 'ndoutils' -p 'ndoutils_password' -h 'localhost' -d nagios

cd .. 

SSH Servers Clients and Keys - More about it now

This article covers important information about SSH Servers, Clients and Keys. Use SSH keys for authentication when you are connecting to your server, or even between your servers.

They can greatly simplify and increase the security of your login process. 

When keys are implemented correctly they provide a secure, fast, and easy way of accessing your cloud server.


Turn off password authentication Linux:

With SSH key authentication configured and tested, you can disable password authentication for SSH altogether to prevent brute-forcing. While logged in to your cloud server, do the following:

1. Open the SSH configuration file with the following command.

$ sudo nano /etc/ssh/sshd_config

2. Set the password authentication to no to disable clear text passwords.

PasswordAuthentication no

3. Check that public key authentication is enabled, just to be safe and not get locked out from your server. If you do find yourself unable to log in with SSH, you can always use the Web terminal control panel.

PubkeyAuthentication yes

Then save and exit the editor.

4. Restart the SSH service to apply the changes by using the command below.

$ sudo systemctl restart sshd

With that done your cloud server is now another step along towards security. 

Malicious attempts to connect to your server will result in an authentication rejection, as plain passwords are not allowed, and brute-forcing an RSA key is practically impossible.
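Steps 1 to 3 can also be done non-interactively. The sketch below edits a copy of the configuration so you can review and validate it before replacing the live file:

```shell
# harden_sshd_copy SRC DST -- write a copy of SRC with password logins
# disabled and public key authentication explicitly enabled (steps 1-3)
harden_sshd_copy() {
    sed -E -e 's/^#?PasswordAuthentication.*/PasswordAuthentication no/' \
           -e 's/^#?PubkeyAuthentication.*/PubkeyAuthentication yes/' "$1" > "$2"
}

# On the server, validate the copy, install it, and restart sshd (step 4) as root:
# harden_sshd_copy /etc/ssh/sshd_config /tmp/sshd_config.new
# sshd -t -f /tmp/sshd_config.new && cp /tmp/sshd_config.new /etc/ssh/sshd_config
# systemctl restart sshd
```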

Welcome to Emergency mode in Linux - Fix this boot error now

This article covers how to fix boot error, Welcome to Emergency mode in Linux. This issue happens after an emergency power outage on a server, a system crash, or similar situations.


Emergency mode sometimes means that your file system may be corrupted.

In such cases, you will be left at a prompt with nowhere to go.

All you have to do is perform a file system check using,

fsck.ext4 /dev/sda3

where sda3 is your affected partition, and if you are using the ext3 file system, change the command as follows:

fsck.ext3 /dev/sda3

About the partition number, Linux shows you the partition before arriving at the prompt.

This should solve the problem.


To fix  Emergency Mode On Ubuntu:

1. use Ubuntu Live USB to boot, and open terminal:

$ sudo fsck.ext4 /dev/sda3

2. Adding sudo because it needs root permission.

(Replace ext4 with ext3 if applicable to you)

3. Cycle through the partitions by changing the last number in sda (sda1, sda2, sda3, sda4, and so on) to see which file system has problems.

4. In my case, the problem was with the 'home' directory.

5. Once you run the above command, you'll be prompted to fix the issue right inside the terminal itself.

6. Keep hitting y (for yes) until the end of the fix.

(or pass -fy to force the check and automatically answer yes to all prompts.)

7. Navigate to the home folder of your sda using your files explorer.

(This will be mounted from the HDD since you are working with a Live USB)

8. Check inside 'home' if you can see all your files. If yes, then you're ready to reboot to your system (remove the Live USB).

Map Network Drives or Shared Folders with Group Policy - How to do it

This article covers how to map network drives or shared folders with Group Policy.

Mapping network drives using Group Policy preferences is flexible, provides easy control over who receives the drive mappings, and has easy-to-use user interfaces, all of which are in stark contrast with the complexities associated with scripts.


To Set up drive mappings with Group Policy preferences:

1. Group Policy preferences are a set of extensions that increase the functionality of Group Policy Objects (GPOs). 

2. Administrators can use them to deploy and manage applications on client computers with configurations targeted to specific users. 

3. The Drive Maps policy in Group Policy preferences allows an administrator to manage drive letter mappings to network shares.


To Deploy item-level targeting with Group Policy preferences:

Item-level targeting (ILT) is a feature of Group Policy preferences that allows preference settings to be applied to individual users and/or computers dynamically. ILT allows an administrator to specify a list of conditions that must be met in order for a preference setting to be applied or removed to a user or computer object.

For example, you can configure a drive mapping so that only users in the Product Managers group receive it. 

1. Under the Common tab of the mapped drive properties, check the Item-level targeting option, and then click Targeting….

2. In the Targeting Editor window, click New Item and select Security Group.

3. Click the … button, and type in the name of the security group.

4. Click OK to close the Targeting Editor once you're finished adding items to the list. 

resolvconf error resolv conf must be a symlink - Fix it Now

This article covers how to fix resolv.conf error which happens when we try to restart the BIND 9 server under Ubuntu Linux.


To fix Resolvconf error "resolvconf: Error: /etc/resolv.conf must be a symlink":

Open a terminal and run the following commands:

$ sudo rm /etc/resolv.conf

$ sudo ln -s ../run/resolvconf/resolv.conf /etc/resolv.conf

$ sudo resolvconf -u


As of Ubuntu 12.04 resolvconf is part of the base system.

You can recreate the needed symlink by running:

$ dpkg-reconfigure resolvconf

or by doing the following in a terminal.

$ sudo ln -nsf ../run/resolvconf/resolv.conf /etc/resolv.conf

Note that as of Ubuntu 12.10 resolvconf no longer aborts if /etc/resolv.conf is not a symlink. It does print a warning message, but this can be silenced by putting the line:

REPORT_ABSENT_SYMLINK=no

in /etc/default/resolvconf.

Segmentation fault in Nagios - Fix it Now

This article covers how to fix the Segmentation fault in Nagios.

A segmentation fault (aka segfault) is a common condition that causes programs to crash; they are often associated with a file named core.

Segfaults are caused by a program trying to read or write an illegal memory location.


What does segmentation fault mean in Linux?

A segmentation fault is when your program attempts to access memory it has either not been assigned by the operating system, or is otherwise not allowed to access. "segmentation" is the concept of each process on your computer having its own distinct virtual address space.


Typical causes of a segmentation fault:

1. Attempting to access a nonexistent memory address (outside the process's address space)

2. Attempting to access memory the program does not have rights to (such as kernel structures in process context)

3. Attempting to write to read-only memory (such as the code segment)


To fix Segmentation Fault (“Core dumped”) in Ubuntu:

1. Remove the lock files present at different locations.

2. Remove repository cache.

3. Update and upgrade your repository cache.

4. Now upgrade your distribution, it will update your packages.

5. Find the broken packages and delete them forcefully.
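On Ubuntu, the five steps above correspond roughly to the commands below. This is a sketch for apt-based systems: the lock-file paths are the usual defaults, the commands must run as root, and the force-removal in step 5 should be used with care.

```shell
# 1. Remove stale lock files (usual default locations)
rm -f /var/lib/apt/lists/lock /var/cache/apt/archives/lock /var/lib/dpkg/lock*

# 2. Remove the repository cache
rm -rf /var/lib/apt/lists/*

# 3. Update and upgrade the repository cache and packages
apt-get update && apt-get upgrade -y

# 4. Upgrade the distribution
apt-get dist-upgrade -y

# 5. Find broken (half-installed) packages, then force-remove them by name
dpkg -l | grep -E '^(iU|iF)'
# dpkg --remove --force-remove-reinstreq <package-name>
```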

ERROR: PleskDBException: Unable to connect to database - Fix it now

This article covers methods to resolve "ERROR: PleskDBException: Unable to connect to database", which can be triggered for various reasons, including InnoDB engine corruption, a full disk, a data directory not completely restored or recovered, and so on. 

One common cause of this error is a full disk, in which case you need to free disk space on the Linux server.

fslint is a Linux utility that removes unwanted and problematic cruft ("lint") from files and file names, keeping the system clean.


To Clear RAM Memory Cache, Buffer and Swap Space on Linux:

1. Clear PageCache only. # sync; echo 1 > /proc/sys/vm/drop_caches.

2. Clear dentries and inodes. # sync; echo 2 > /proc/sys/vm/drop_caches.

3. Clear PageCache, dentries and inodes. # sync; echo 3 > /proc/sys/vm/drop_caches.

4. sync flushes the file system buffers. Commands separated by ";" run sequentially.


To find largest files including directories in Linux is as follows:

1. Open the terminal application.

2. Login as root user using the sudo -i command.

3. Type du -a /dir/ | sort -n -r | head -n 20.

4. du will estimate file space usage.

5. sort will sort the output of the du command.
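To see the pipeline from steps 3 to 5 in action without touching real data, you can run it against a throwaway directory with files of known sizes:

```shell
# Build a scratch directory with two files of known size, then find the
# largest entries with the same du | sort | head pipeline described above.
demo=$(mktemp -d)
dd if=/dev/zero of="$demo/big.bin" bs=1M count=5 2>/dev/null
dd if=/dev/zero of="$demo/small.bin" bs=1M count=1 2>/dev/null

# -a lists files as well as directories; sort -n -r puts the largest first
du -a "$demo" | sort -n -r | head -n 3
```

The directory total is printed first, then big.bin, then small.bin.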


To uninstall an RPM package:

1. Execute the following command to discover the name of the installed package: rpm -qa | grep Micro_Focus. This returns PackageName , the RPM name of your Micro Focus product which is used to identify the install package.

2. Execute the following command to uninstall the product: rpm -e [ PackageName ]

isc-dhcp-server Job failed to start - Resolve it now

This article covers methods to resolve the DHCP 'isc-dhcp-server: Job failed to start' error. Basically, the 'isc-dhcp-server: Job failed to start' error can happen if there are issues with the commands that we run.


You can try to restart the service; if it really is an issue with the service starting before the network is up, restarting it once the network is up should work:

$ sudo systemctl restart isc-dhcp-server.service


If that doesn't work then try and investigate further why it's not starting by first getting the current status of the service:

$ sudo systemctl status isc-dhcp-server.service


That should also give you a PID which you can investigate further with journalctl, where XXXXX is the PID of the service:

$ journalctl _PID=XXXXX


Also, consider what led you to make the following changes; undoing them may help, whether the /etc/init/isc-dhcp-server.conf file was already there or you created it manually.

A common workaround is to add a "sleep 30" to the /etc/init/isc-dhcp-server.conf file and add "up service dhcp3-server restart" to the /etc/network/interfaces file, so the service restarts once the network is up. 

Files and Processes in SELinux on CentOS 7 - More information

This article covers Files and Processes in SELinux. Basically, managing file and process context are at the heart of a successful SELinux implementation.

With SELinux, a process or application will have only the rights it needs to function and NOTHING more. The SELinux policy for the application will determine what types of files it needs access to and what processes it can transition to. 

SELinux policies are written by app developers and shipped with the Linux distribution that supports it. A policy is basically a set of rules that maps processes and users to their rights.


SELinux enforces something we can term as “context inheritance”. What this means is that unless specified by the policy, processes and files are created with the contexts of their parents.

So if we have a process called “proc_a” spawning another process called “proc_b”, the spawned process will run in the same domain as “proc_a” unless specified otherwise by the SELinux policy.


SELinux in Action: Testing a File Context Error

1. First, let's create a directory named www under the root. We will also create a folder called html under www:

mkdir -p /www/html

 

2. If we run the ls -Z command, we will see these directories have been created with the default_t context:

ls -Z /www/

drwxr-xr-x. root root unconfined_u:object_r:default_t:s0 html


3. Next we copy the contents of the /var/www/html directory to /www/html:

cp /var/www/html/index.html /www/html/

 

The copied file will have a context of default_t. That's the context of the parent directory.


We now edit the httpd.conf file to point to this new directory as the web site's root folder. 

i. We will also have to relax the access rights for this directory.

vi /etc/httpd/conf/httpd.conf

ii. First we comment out the existing location for document root and add a new DocumentRoot directive to /www/html:

# DocumentRoot "/var/www/html"

DocumentRoot "/www/html"

iii. We also comment out the access rights section for the existing document root and add a new section:

#<Directory "/var/www">

#    AllowOverride None

    # Allow open access:

#    Require all granted

#</Directory>


<Directory "/www">

    AllowOverride None

    # Allow open access:

    Require all granted

</Directory>


We leave the location of the cgi-bin directory as it is. We are not getting into detailed Apache configuration here; we just want our site to work for SELinux purposes.


iv. Finally, restart the httpd daemon:

service httpd restart

 

Once the server has been restarted, accessing the web page will give us the same “403 Forbidden” error (or default “Testing 123” page) we saw before.

The error is happening because the index.html file's context changed during the copy operation. It needs to be changed back to its original context (httpd_sys_content_t).
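On an SELinux system the context can be fixed either one-off with chcon, or persistently by recording a rule for the new document root and then running restorecon; the /www paths below are the ones used in this walkthrough:

```shell
# One-off: relabel the copied file with the expected web content type
chcon -t httpd_sys_content_t /www/html/index.html

# Persistent: record that everything under /www is web content,
# then apply the recorded default contexts recursively
semanage fcontext -a -t httpd_sys_content_t "/www(/.*)?"
restorecon -Rv /www
```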

restorecond Will not restore a file with more than one hard link - How to resolve this issue

This article covers Tips to fix 'restorecond: Will not restore a file with more than one hard link' error.

To fix this problem type the following commands:

# rm /etc/sysconfig/networking/profiles/default/resolv.conf

# restorecon /etc/resolv.conf

# ln /etc/resolv.conf /etc/sysconfig/networking/profiles/default/resolv.conf

dhclient to persistently look for an IP address lease - Configure it Now

This article covers how to use the dhclient command. Basically, the Linux dhclient command can keep requesting an IP lease until the DHCP server/router grants one.

With this guide, you can easily configure dhclient to continuously request an IP lease until one is granted by the DHCP server/router.
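By default dhclient already keeps retrying in the background; the retry cadence can be tuned in /etc/dhcp/dhclient.conf. The values below are illustrative:

```
# /etc/dhcp/dhclient.conf -- retry behaviour (illustrative values)
timeout 300;    # keep a single attempt going for up to 5 minutes
retry 60;       # wait 60 seconds before trying again after a failure
```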

DHCP Client Error mv cannot move - Fix this error now

This article covers how to fix DHCP Client Error: mv cannot move. This DHCP error indicates that dhclient (Dynamic Host Configuration Protocol client) is not able to update your name resolution configuration file /etc/resolv.conf. 


To fix this DHCP error:

1. Run dhclient as root user

Use sudo command to run dhclient, enter:

$ sudo dhclient eth0

2. Make sure /etc/resolv.conf is not write protected

Use lsattr, command to view file attributes:

$ lsattr /etc/resolv.conf

Clear i attribute, enter:

$ sudo chattr -i /etc/resolv.conf

3. Now run dhclient again to update the file and obtain a new IP address.

Create CentOS Fedora RHEL VM Template on KVM - How to do it

This article covers how to create CentOS/Fedora/RHEL VM Templates on KVM. VM Templates are more useful when deploying high numbers of similar VMs that require consistency across deployments. If something goes wrong in an instance created from the Template, you can clone a fresh VM from the template with minimal effort.


To install KVM in your Linux system:

The KVM service (libvirtd) should be running and enabled to start at boot.

$ sudo systemctl start libvirtd

$ sudo systemctl enable libvirtd

Enable vhost-net kernel module on Ubuntu/Debian.

$ sudo modprobe vhost_net

# echo vhost_net | sudo tee -a /etc/modules


How to Prepare CentOS / Fedora / RHEL VM template ?

1. Update system

After you finish VM installation, login to the instance and update all system packages to the latest versions.

$ sudo yum -y update

2. Install standard basic packages missing:

$ sudo yum install -y epel-release vim bash-completion wget curl telnet net-tools unzip lvm2 

3. Install acpid and cloud-init packages.

$ sudo yum -y install acpid cloud-init cloud-utils-growpart

$ sudo systemctl enable --now acpid

4. Disable the zeroconf route

$ echo "NOZEROCONF=yes" | sudo tee -a /etc/sysconfig/network

5. Configure GRUB_CMDLINE_LINUX – For Openstack usage.

If you plan on exporting template to Openstack Glance image service, edit the /etc/default/grub file and configure the GRUB_CMDLINE_LINUX option. Your line should look like below – remove rhgb quiet and add console=tty0 console=ttyS0,115200n8.

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=cl/root rd.lvm.lv=cl/swap console=tty0 console=ttyS0,115200n8"

Generate grub configuration.

$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg

6. Install other packages you need on your baseline template.

7. When done, power off the virtual machine.


How to Clean VM template ?

You need virt-sysprep tool for cleaning the instance.

$ sudo virt-sysprep -d centos7

Add Compute Host to oVirt Virtualization - How to do it

This article covers oVirt Virtualization and how to add Compute Host to oVirt Virtualization. oVirt is a free and open-source distributed virtualization solution that can be used to manage your entire infrastructure.

oVirt allows you to manage virtual machines, compute, storage and networking resources from a web-based interface. It uses the KVM hypervisor and is built upon several other community projects, including libvirt, Gluster, PatternFly, and Ansible.


To Add Compute Host to oVirt:

1. Validate oVirt Engine installation by logging into the console.

2. Navigate to Compute > Hosts > New and fill all required information.

3. Modify other settings in the left panel as you see fit and click "OK" button to provision the node.

4. The Status should change to Installing and will finish in a few minutes.


To Configure Host Networking:

1. If you want to add additional networks (extra bridges with VLANs, etc.), this can be done once the host is added.

2. First create a Logical Network under Network > New. Give the virtual network the correct details; for a VLAN ID, check "Enable VLAN tagging".

3. With the Host added and active you can configure its networking under Network Interfaces > Setup Host Networks.

4. Assign the Logical network to an interface.

5. Drag Virtual Network for mapping to host interface.

6. Configure IP addressing if required.

7. Once saved and successful it should turn green.

8. You can then proceed to create virtual machines using oVirt Management interface.

Icons images and javascript files missing from frontend in Magento - Fix it now

This article covers methods to resolve Magento error related to with icons, images, and javascript files missing from frontend. Basically, icons, images, and javascript files might not appear if we use the wrong Magento update command to upgrade the version.


If you are in production mode, and are running setup:upgrade and you don't want regenerate static content because there is no need to, then you can use the following:

php bin/magento setup:upgrade --keep-generated

--keep-generated is an optional argument that does not update static view files. 

It should be used only in production mode. 

It should not be used in developer mode.


Solutions for the issue where stylesheets and images do not load after installing Magento ?

The following are possible solutions depending on the software you use and the cause of the problem:

If you are using the Apache web server, verify your server rewrites setting and your Magento server's base URL and try again. If you set up the Apache AllowOverride directive incorrectly, the static files are not served from the correct location.

If you are using the nginx web server, be sure to configure a virtual host file. The nginx virtual host file must meet the following criteria:

The include directive must point to the sample nginx configuration file in your Magento installation directory. For example:

include /var/www/html/magento2/nginx.conf.sample;

The server_name directive must match the base URL you specified when installing Magento. For example:

server_name ip_address;

If the Magento application is in production mode, try deploying static view files using the command magento setup:static-content:deploy.

PHP Handlers for your Server - Which is suitable

This article covers the pros and cons of different PHP Handlers. Basically, selecting the proper PHP handler plays a major role in the server's stability and performance. 

Apache does not natively support PHP scripts without a special module. The module that tells Apache how to handle PHP scripts is referred to as a PHP handler. 

Without a properly configured module, Apache will just send you the PHP file as a download since it doesn't know what else to do.


How does each PHP handler work and what are the pros and cons :

1. DSO/Apache Module

This is also referred to as mod_php. This module allows Apache itself to directly parse and display PHP files. PHP scripts parsed by mod_php run as the same user that Apache itself does (rather than the user account that hosts the PHP files).


Pros

i. One of the fastest handlers available.

ii. Works with mod_ruid2 or mpm_itk modules.


Cons

i. Only works with a single version of PHP on cPanel servers (you'll need to use other handlers for other versions of PHP if you offer them).

ii. Scripts run as the Apache user rather than the owner of the domain or subdomain. For example, on a cPanel server, if the script creates a file or directory, that file will be owned by the user "nobody" which can cause problems when the account owner goes to backup or remove them.


2. CGI

Stands for Common Gateway Interface. Using this handler, the system will run PHP scripts as the user that owns the domain or subdomain.


Pros

Scripts run as the domain or subdomain user, not as the Apache user.


Cons

i. One of the slowest handlers.

ii. Doesn't work well with PHP opcode caching.

iii. Cannot put PHP configuration changes in an .htaccess file.


3. FCGI/FastCGI

FastCGI is a variation of the CGI protocol that provides a number of benefits over the older CGI handler. Using this module, the system will run PHP scripts as the user that owns the domain or subdomain. There are some differences between mod_fastcgi and mod_fcgid, but none that are relevant to the scope of this article.


Pros

i. Scripts run as the domain or subdomain user, not as the Apache user.

ii. Very fast handler.

iii. Works with PHP opcode caching.


Cons

i. This handler uses more memory than most of the others.

ii. Cannot put PHP configuration changes in an .htaccess file.


4. PHP-FPM

FPM stands for FastCGI Process Manager. It is an improved way of implementing FastCGI processing of PHP.  Using this handler, the system will run PHP scripts as the user that owns the domain or subdomain. Each FPM pool can have independent settings.


Pros

i. Scripts run as the domain or subdomain user, not as the Apache user.

ii. One of the fastest PHP handlers.

iii. Works with PHP opcode caching.

iv. Allows for some additional level of flexibility per pool.


Cons

i. This handler can use more memory than any other handler listed here, but that depends on the number of sites using PHP-FPM and the configuration of the FPM pool.

ii. Can be somewhat more complicated to manage.

iii. Cannot put PHP configuration changes in an .htaccess file and some directives can only be changed on a global level.


5. suPHP

This handler was specifically designed to serve PHP scripts as the owner of the domain or subdomain that is executing the PHP script. On cPanel servers, it is also configured to disallow execution of files with unsafe permissions. cPanel maintains its copy of suPHP with the latest security fixes.


Pros

i. Scripts run as the domain or subdomain user, not as the Apache user.

ii. cPanel configures suPHP so that it blocks accessing or executing any files or directories with permissions higher than 755 for security.


Cons

i. Slowest PHP handler in most cases.

ii. PHP Opcode caching has no performance improvement and only wastes memory.

iii. Cannot put PHP configuration changes in an .htaccess file.


6. LSAPI

This handler implements the LiteSpeed Web Server (LSWS) SAPI. This handler requires CloudLinux or LSWS for the maximum benefits. Using this handler, the system will run PHP scripts as the user that owns the domain or subdomain.


Pros

i. Designed to perform as well as or better than PHP-FPM under certain circumstances.

ii. Less memory use than most other handlers.

iii. Scripts run as the domain or subdomain user, not as the Apache user.

iv. No special configuration required.

v. Can read PHP values out of a .htaccess file.


Cons

i. You don't get full benefits without purchasing a third-party commercial product.

ii. Not compatible with mod_ruid2 or mpm_itk (but it shouldn't need them).

Show dropped packets per interface on Linux - Methiods to check it

This article covers how to Show dropped packets per interface on Linux. 

There can be various reasons for packet loss. It can be that the network transport is unreliable and packet loss is natural, the network link could be congested, or applications cannot handle the offered load.

Sometimes there are too many packets; they are saved to a buffer, but they arrive faster than they can be processed, so eventually the buffer runs out of space and the kernel drops all further packets until there is free space in the buffer.


You will learn the different Linux commands to see packet loss on Linux per-interface, including excellent tools such as dropwatch. 

We can also use perf, the Linux profiling utility built on performance counters.


To display show dropped packets per interface on Linux using the netstat:

The netstat command is mostly obsolete; its replacements are the ss and ip commands. 

However, netstat is still available on older Linux distros that are still in production. 

Hence, I will start with netstat, but use the ip/ss tools if possible. 

The command in Linux is:

$ netstat -i

$ netstat --interfaces


To display summary statistics for each protocol, run:

$ netstat -s

$ netstat --statistics


To show dropped packet statistics per network interface on Linux using ip:

Let us see how to view link device stats using the ip command.

The syntax is:

$ ip -s link

$ ip -s link show {interface}

$ ip -s link show eth0
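
The per-interface drop counters in this output can also be extracted programmatically. Below is a minimal awk sketch run against embedded sample output, since the exact counters vary per host; the column layout assumed here is iproute2's `ip -s link` format. On a real host, pipe `ip -s link` into the awk filter instead:

```shell
# Print per-interface RX "dropped" counts from `ip -s link` style output
sample='2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500
    link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    1000       10       0       5       0       0
    TX: bytes  packets  errors  dropped carrier collsns
    2000       20       0       0       0       0'

printf '%s\n' "$sample" | awk '
  /^[0-9]+:/   { iface = $2; sub(/:$/, "", iface) }   # interface header line
  want         { print iface, $4; want = 0 }          # stats line after RX header: column 4 is "dropped"
  /RX: *bytes/ { want = 1 }
'
# prints: eth0 5
```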

SELinux users on CentOS 7 – Actions and Deciphering error messages

This article covers more information about SELinux users on CentOS 7.


Deciphering SELinux Error Messages

We looked at one SELinux error message. We then used the grep command to sift through the /var/log/messages file. Fortunately, SELinux comes with a few tools to make life a bit easier than that. These tools are not installed by default and require installing a few packages, which you should have installed in the first part of this tutorial.

The first command is ausearch. We can make use of this command if the auditd daemon is running. In the following code snippet we are trying to look at all the error messages related to the httpd daemon. Make sure you are in your root account:

ausearch -m avc -c httpd

In our system a number of entries were listed, but we will concentrate on the last one:

----
time->Thu Aug 21 16:42:17 2014
...
type=AVC msg=audit(1408603337.115:914): avc:  denied  { getattr } for  pid=10204 comm="httpd" path="/www/html/index.html" dev="dm-0" ino=8445484 scontext=system_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:default_t:s0 tclass=file

Even experienced system administrators can get confused by messages like this unless they know what they are looking for. To understand it, let’s take apart each of the fields:

type=AVC and avc: AVC stands for Access Vector Cache. SELinux caches access control decisions for resources and processes. This cache is known as the Access Vector Cache (AVC). That's why SELinux access denial messages are also known as “AVC denials”. These two fields of information are saying the entry is coming from an AVC log and it’s an AVC event.


denied { getattr }: The permission that was attempted and the result it got. In this case the get attribute operation was denied.

pid=10204: This is the process id of the process that attempted the access.

comm: The process id by itself doesn’t mean much. The comm attribute shows the process command. In this case it’s httpd. Immediately we know the error is coming from the web server.

path: The location of the resource that was accessed. In this case it’s a file under /www/html/index.html.

dev and ino: The device where the target resource resides and its inode address.

scontext: The security context of the process. We can see the source is running under the httpd_t domain.

tcontext: The security context of the target resource. In this case the file type is default_t.

tclass: The class of the target resource. In this case it’s a file.
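
These fields can also be pulled out of a denial line programmatically. A small grep/cut sketch using the sample line from above (the field names are the standard AVC ones):

```shell
# Extract key fields from an AVC denial line (sample line from this article)
avc='type=AVC msg=audit(1408603337.115:914): avc:  denied  { getattr } for  pid=10204 comm="httpd" path="/www/html/index.html" dev="dm-0" ino=8445484 scontext=system_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:default_t:s0 tclass=file'

for key in pid comm path scontext tcontext tclass; do
    # grab the first key=value token, then strip the key and any quotes
    val=$(printf '%s\n' "$avc" | grep -o "${key}=[^ ]*" | head -1 | cut -d= -f2- | tr -d '"')
    printf '%s=%s\n' "$key" "$val"
done
# prints, among others: pid=10204 and comm=httpd
```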

Disable database UTF8 connectivity on Nagios - How to do it

This article covers how to disable UTF8 connectivity to the MySQL/MariaDB databases. By default, Nagios XI uses UTF8; however, sometimes this needs to be disabled to allow MySQL/MariaDB to define the connectivity method.

This configuration ensures that characters from different languages can be correctly stored and retrieved in the databases.


The Nagios XI Configuration Directive

The following configuration directive was added in Nagios XI 5.4.13:

$cfg['db_conn_utf8'] = 0;

 To determine if you currently have that directive enabled, establish a terminal session to your Nagios XI server as the root user and execute the following command:

$ grep db_conn_utf8 /usr/local/nagiosxi/html/config.inc.php

 If the grep command produces NO output then the directive does not exist in your configuration and it needs to be added. This can be added with the following command:

$ printf "\n\$cfg['db_conn_utf8'] = 0;\n" >> /usr/local/nagiosxi/html/config.inc.php

 

If the grep command produced output then it can be changed with the following command (sets it to 0):

$ sed -i "s/db_conn_utf8'\] =.*/db_conn_utf8'\] = 0;/g" /usr/local/nagiosxi/html/config.inc.php

Setting the directive to 0 will resolve the issue of garbled or ??? characters.


If you wanted to change it to 1 then use the following command:

$ sed -i "s/db_conn_utf8'\] =.*/db_conn_utf8'\] = 1;/g" /usr/local/nagiosxi/html/config.inc.php

 

The change takes effect immediately.
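
If you want to verify the sed expression before touching the live file, it can be exercised on a scratch copy first (a sketch; the real file is /usr/local/nagiosxi/html/config.inc.php):

```shell
# Demonstrate the directive edit on a scratch copy rather than the live config
cfg=$(mktemp)
echo "\$cfg['db_conn_utf8'] = 1;" > "$cfg"

sed -i "s/db_conn_utf8'\] =.*/db_conn_utf8'\] = 0;/g" "$cfg"
cat "$cfg"
# prints: $cfg['db_conn_utf8'] = 0;
```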

Set up Amazon CloudFront with WordPress site - Do it now

This article covers how to set up Amazon CloudFront with WordPress site. WordPress performs reasonably well out of the box, but there is room for improvement—the number of WordPress plugins that address performance is evidence of this.  However, the easiest way to improve the user experience is to accelerate one's entire WordPress website by using CloudFront. 

Doing this will not only improve your site's responsiveness, but it may also reduce the overall cost of operating your WordPress infrastructure, as reducing the load on your web servers may help you scale down the required infrastructure. 

In fact, CloudFront can significantly help your site cope with an unexpected load when your site gets popular.


How does CloudFront help?

Many AWS customers have users spread across the globe that they want to reach. However, what once required an immense engineering effort can now be easily built using AWS Regions and Edge locations, which allow you to serve content from the locations closest to those users.

Data transfers on the internet depend largely on global networks of fiber optic cables, allowing very high bandwidth data transfers. 


As the speed of light is proving a difficult challenge to overcome, Amazon CloudFront improves the experience for users accessing your websites in several other ways, including:

1. Anycast DNS ensures your customers are routed to the nearest edge location.

2. Cached content, when available, is delivered to your users from the edge location.

3. When data needs to be fetched from your site CloudFront optimizes network throughput by managing the transfers between Edge Locations and your website.  This traffic runs over the Amazon Global Backbone, where optimized TCP configuration ensures more bytes are in-flight on the network, improving throughput, while TCP connection re-use eliminates much of the latency associated with establishing connections.   In this way, whether content is cached or not, it will be accelerated by delivery over optimized network paths.

4. Finally, negotiating and offloading Transport Layer Security (TLS) at the CloudFront Edge further improves performance, reducing connection setup latency, and further supporting back-end connection re-use.

Force DHCP Client to Renew IP Address - Perform it now

This article covers how to force DHCP client to renew IP address. You need to use Dynamic Host Configuration Protocol Client i.e., dhclient command. 

The client normally doesn't release the current lease as it is not required by the DHCP protocol. Some cable ISPs require their clients to notify the server if they wish to release an assigned IP address. 

The dhclient command, provides a means for configuring one or more network interfaces using the Dynamic Host Configuration Protocol, BOOTP protocol, or if these protocols fail, by statically assigning an address.


Linux renew ip command using dhcp:

The -r flag explicitly releases the current lease, and once the lease has been released, the client exits. 

For example, open terminal application and type the command:

$ sudo dhclient -r

Now obtain fresh IP address using DHCP on Linux:

$ sudo dhclient


To start or stop the DHCP service in Linux (SysV init):

1. To start the DHCP service, type the following command: # /etc/init.d/dhcp start.

2. To stop the DHCP service, type the following command: # /etc/init.d/dhcp stop. 

The DHCP daemon stops until it is manually started again, or the system reboots.


How can I renew or release an IP in Linux for eth0?

To renew or release an IP address for the eth0 interface, enter:

$ sudo dhclient -r eth0

$ sudo dhclient eth0

In this example, I am renewing an IP address for the eth0 interface with verbose output:

sudo dhclient -v -r eth0

sudo dhclient -v eth0


Command to release/renew a DHCP IP address in Linux:

1. ip a - Get ip address and interface information on Linux

2. ip a s eth0 - Find the current ip address for the eth0 interface in Linux

3. dhclient -v -r eth0 - Force Linux to renew IP address using a DHCP for eth0 interface

4. systemctl restart network.service - Restart networking service and obtain a new IP address via DHCP on CentOS/RHEL/Fedora Linux

5. systemctl restart networking.service - Restart networking service and obtain a new IP address via DHCP on Ubuntu/Debian Linux

6. nmcli con - Use NetworkManager to obtain info about Linux IP address and interfaces

7. nmcli con down id 'enp6s0' - Take down Linux interface enp6s0 and release IP address in Linux

8. nmcli con up id 'enp6s0' - Bring up Linux interface enp6s0 and obtain a new IP address using DHCP
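
To confirm that a renewal actually produced an address, the assigned IPv4 address can be pulled out of the ip output. A sketch against embedded sample output (on a real host, pipe `ip a s eth0` into the awk filter instead):

```shell
# Extract the IPv4 address from `ip a s <iface>` style output
sample='2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500
    inet 192.168.1.42/24 brd 192.168.1.255 scope global dynamic eth0
    inet6 fe80::1/64 scope link'

printf '%s\n' "$sample" | awk '/inet / { sub(/\/.*/, "", $2); print $2 }'
# prints: 192.168.1.42
```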

MongoDB service is not starting up - Fix it now

This article covers how to resolve the problem of starting the MongoDB server when running the command mongod, which may arise due to file permission or ownership issues.


A common reason is the dbpath variable in /etc/mongodb.conf.

To fix it, you only have to change the owner of the /data/db directory recursively.

Also, for Ubuntu, a simple fix that worked was to install the mongodb package:

$ sudo apt-get install mongodb


Also, you can use the below-mentioned commands when the MongoDB service is not starting up:

$ sudo rm /var/lib/mongodb/mongod.lock

$ mongod --repair

$ sudo service mongodb start


Mongodb service is not starting up:

This can also happen if your file permissions get changed somehow. 

Removing the lock file didn't help, and we were getting errors in the log file like:

2016-01-20T09:14:58.210-0800 [initandlisten] warning couldn't write to / rename file /var/lib/mongodb/journal/prealloc.0: couldn't open file    /var/lib/mongodb/journal/prealloc.0 for writing errno:13 Permission denied

2016-01-20T09:14:58.288-0800 [initandlisten] couldn't open /var/lib/mongodb/local.ns errno:13 Permission denied

2016-01-20T09:14:58.288-0800 [initandlisten] error couldn't open file /var/lib/mongodb/local.ns terminating


So, went to check permissions:

ls -l /var/lib/mongodb

total 245780

drwxr-xr-x 2 mongodb mongodb     4096 Jan 20 09:14 journal

drwxr-xr-x 2 root    root        4096 Jan 20 09:11 local

-rw------- 1 root    root    67108864 Jan 20 09:11 local.0

-rw------- 1 root    root    16777216 Jan 20 09:11 local.ns

-rwxr-xr-x 1 mongodb nogroup        0 Jan 20 09:14 mongod.lock
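
The root-owned entries above are the problem: the mongodb user cannot write to them. The usual remedy is to chown the data directory back to the service user, i.e. `sudo chown -R mongodb:mongodb /var/lib/mongodb`. The sketch below demonstrates the same operation on a scratch directory with the current user, since the real command needs root and the mongodb account:

```shell
# Demo of the ownership fix on a scratch directory.
# On a real server: sudo chown -R mongodb:mongodb /var/lib/mongodb
demo=$(mktemp -d)
mkdir "$demo/journal"
touch "$demo/local.0" "$demo/local.ns"

chown -R "$(id -un)" "$demo"     # recursive ownership change, as in the real fix
stat -c '%U' "$demo/local.ns"    # prints the owning user
```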

Log Suspicious Martian Packets Un-routable Source Addresses in Linux

This article covers how to block and log suspicious martian packets on Linux servers.


Log Suspicious Martian Packets in Linux:

On the public Internet, a martian packet's source address is either spoofed, so it cannot originate as claimed, or the packet cannot be delivered.

In both IPv4 and IPv6, martian packets have a source or destination address within the special-use ranges defined in RFC 6890.


Benefits of logging of martians packets:

As I said earlier, a martian packet is a packet with a source address that cannot be routed over the public Internet.

Such a packet is a waste of resources on your server.

Often martian and unroutable packets are used for malicious purposes such as DoS/DDoS attacks against your server.

So you must drop bad martian packets early and log them on your server for further inspection.


To log Martian packets on Linux?

You need to use the sysctl command to view or set Linux kernel variables that log packets with un-routable source addresses to the kernel log file such as /var/log/messages.


To log suspicious martian packets on Linux:

You need to set the following variables to 1 in /etc/sysctl.conf file:

net.ipv4.conf.all.log_martians

net.ipv4.conf.default.log_martians


Edit file /etc/sysctl.conf, enter:

# vi /etc/sysctl.conf

Append/edit as follows:

net.ipv4.conf.all.log_martians=1 

net.ipv4.conf.default.log_martians=1


Save and close the file.

To load changes, type:

# sysctl -p
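
Once logging is enabled, martian entries show up in the kernel log. Below is a sketch for summarizing them per claimed source address, run here against embedded sample lines (on a real host, pipe `dmesg | grep -i martian` instead; the exact message format can vary by kernel version):

```shell
# Count logged martian packets per claimed source address
log='IPv4: martian source 10.0.0.5 from 172.16.0.4, on dev eth0
IPv4: martian source 10.0.0.5 from 192.168.1.9, on dev eth0
IPv4: martian source 10.0.0.5 from 172.16.0.4, on dev eth0'

printf '%s\n' "$log" | awk '
  /martian source/ { sub(/,$/, "", $6); count[$6]++ }   # field 6 is the "from" address
  END { for (ip in count) print ip, count[ip] }
' | sort
# prints:
# 172.16.0.4 2
# 192.168.1.9 1
```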

Redirect FreeBSD Console To A Serial Port for KVM Virsh - How to do it

This article covers how to redirect FreeBSD in KVM to the serial port.

FreeBSD does support a dumb terminal on a serial port as a console.


This is useful for a quick login or for debugging guest system problems without using ssh.

1. First, login as root using ssh to your guest operating systems:

$ ssh ibmimedia@freebsd.ibmimedia.com

su -

2. Edit /boot/loader.conf, enter:

# vi /boot/loader.conf

3. Append the following entry:

console="comconsole"

4. Save and close the file. Edit /etc/ttys, enter:

# vi /etc/ttys

5. Find the line that read as follows:

ttyd0  "/usr/libexec/getty std.9600"   dialup  off secure

6. Update it as follows:

ttyd0   "/usr/libexec/getty std.9600"   vt100   on secure

7. Save and close the file. Reboot the guest, enter:

# reboot

8. After reboot, you can connect to the FreeBSD guest as follows from the host (first get the list of running guest operating systems):

# virsh list

Sample outputs:


 Id Name                 State

----------------------------------

  3 ographics            running

  4 freebsd              running

9. Now, connect to Freebsd guest, enter:

virsh console 4

OR

virsh console freebsd

PXE Boot or DHCP Failure on Guest - Fix it now

This article covers how to fix PXE Boot (or DHCP) Failure on Guest.

Nature of this error:

A guest virtual machine starts successfully, but is then either unable to acquire an IP address from DHCP or boot using the PXE protocol, or both. There are two common causes of this error: having a long forward delay time set for the bridge, and when the iptables package and kernel do not support checksum mangling rules.


Cause of PXE BOOT (OR DHCP) ON GUEST FAILED:

Long forward delay time on bridge.

This is the most common cause of this error. If the guest network interface is connecting to a bridge device that has STP (Spanning Tree Protocol) enabled, as well as a long forward delay set, the bridge will not forward network packets from the guest virtual machine onto the bridge until at least that number of forward delay seconds have elapsed since the guest connected to the bridge. This delay allows the bridge time to watch traffic from the interface and determine the MAC addresses behind it, and prevent forwarding loops in the network topology. If the forward delay is longer than the timeout of the guest's PXE or DHCP client, then the client's operation will fail, and the guest will either fail to boot (in the case of PXE) or fail to acquire an IP address (in the case of DHCP).


Fix to PXE BOOT (OR DHCP) ON GUEST FAILED:

If this is the case, change the forward delay on the bridge to 0, or disable STP on the bridge.

This solution applies only if the bridge is not used to connect multiple networks, but just to connect multiple endpoints to a single network (the most common use case for bridges used by libvirt).


If the guest has interfaces connecting to a libvirt-managed virtual network, edit the definition for the network, and restart it. 

For example, edit the default network with the following command:

# virsh net-edit default

Add the following attributes to the <bridge> element:

<bridge name='virbr0' delay='0' stp='on'/>


If this problem is still not resolved, the issue may be due to a conflict between firewalld and the default libvirt network.

To fix this, stop firewalld with the service firewalld stop command, then restart libvirt with the service libvirtd restart command.

Install ClickHouse on Ubuntu 20.04 - Step by step process to perform it

This article covers how to install ClickHouse on Ubuntu. Basically, ClickHouse is an open-source analytics database developed for big data use cases. 

Install of ClickHouse on Ubuntu involves a series of steps that includes adjusting the configuration file to enable listening over other IP address and remote access. 


Column-oriented databases store records in blocks grouped by columns instead of rows. 

By not loading data for columns absent in the query, column-oriented databases spend less time reading data while completing queries. 

As a result, these databases can compute and return results much faster than traditional row-based systems for certain workloads, such as OLAP.


Online Analytics Processing (OLAP) systems allow for organizing large amounts of data and performing complex queries. 

They are capable of managing petabytes of data and returning query results quickly. 

In this way, OLAP is useful for work in areas like data science and business analytics.


Aggregation queries are queries that operate on a set of values and return single output values. 

In analytics databases, these queries are run frequently and are well optimized by the database. 


Some aggregate functions supported by ClickHouse are:

1. count: returns the count of rows matching the conditions specified.

2. sum: returns the sum of selected column values.

3. avg: returns the average of selected column values.


Some ClickHouse-specific aggregate functions include:

1. uniq: returns an approximate number of distinct rows matched.

2. topK: returns an array of the most frequent values of a specific column using an approximation algorithm.


You can set up a ClickHouse database instance on your server and create a database and table, add data, perform queries, and delete the database.

You can start, stop, and check the ClickHouse service with a few commands.

To start the clickhouse-server, use:

$ sudo systemctl start clickhouse-server

The output does not return a confirmation.

To check the ClickHouse service status, enter:

$ sudo systemctl status clickhouse-server

To stop the ClickHouse server, run this command:

$ sudo systemctl stop clickhouse-server

To enable ClickHouse on boot:

$ sudo systemctl enable clickhouse-server

To start working with ClickHouse databases, launch the ClickHouse client. 

When you start a session, the procedure is similar to other SQL management systems.

To start the client, use the command:

$ clickhouse-client

How to setup AWS CloudFront and how it delivers content

This article covers how to setup AWS CloudFront. Basically, CloudFront retrieves data from the Amazon S3 bucket and distributes it to multiple datacenter locations.

Amazon CloudFront works seamlessly with Amazon Simple Storage Service (S3) to accelerate the delivery of your web content and reduce the load on your origin servers. 


Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users.


Benefit of CloudFront?

Great performance: the AWS CloudFront content delivery network optimizes for low latency and high data transfer speeds.

CloudFront's intelligent routing is based on real-world latency measurements continuously gathered from popular websites, including Amazon.com.


Step by step process on getting started in the AWS Console, configuring your origin, and beginning testing your CloudFront distribution:

1. Go to the AWS Console

2. Create an Amazon S3 bucket

3. Create an Amazon CloudFront distribution

4. Specify your distribution settings

5. Configure your origin

6. Configure Origin Access Identity

7. Configure default cache behavior

8. Configure your TTLs

9. Configure additional features

10. Test your CloudFront distribution

Troubleshoot DNS issues - Step by Step tips to resolve it

This article covers how to troubleshoot DNS issues. One of the handiest tools for troubleshooting DNS failures is the NSLOOKUP command, which you can access from a Windows Command Prompt window. Simply type NSLOOKUP followed by the name of the host for which you want to test the name resolution.

Basically, DNS errors are caused by problems on the user end, whether that's with a network or internet connection, misconfigured DNS settings, or an outdated browser. They can also be attributed to a temporary server outage that renders the DNS unavailable.


DNS: online name resolution:

The domain name system (DNS) is a directory service used for transforming alphanumeric domain names into numeric IP addresses. 

A decentralized process, name resolution generally takes place on DNS servers’ networks distributed throughout the world. 

Every internet address you enter into your web browser’s search bar is forwarded by your router to a DNS server. This server then resolves the domain name into a numeric sequence and returns the corresponding IP address.

Should the DNS server fail to produce an answer, then it won’t be possible to access the desired website; the result is the error message ‘DNS server not responding’.


To  clear your DNS cache:

1. On your keyboard, press Win+X to open the WinX Menu.

2. Right-click Command Prompt and select Run as Administrator.

3. Run the following command: ipconfig /flushdns.


To Troubleshoot DNS issues:

The root of such irritating messages can often be traced back to the server outage. In such cases, the DNS server is temporarily unavailable. Most of the time, these problems can be corrected by changing browsers, switching a few of your firewall settings, or restarting your router.

1. In order to rule out your web browser as the cause of the connection problem, carry out a test by attempting to load the desired web page with alternative applications.

2. In case you aren’t able to achieve your desired results simply by changing browsers, then the next step is to rule out Windows Firewall as the possible culprit.

3. Connection problems can often be solved by restarting the router. Most devices include a power button specifically for this purpose. Should this fail to yield any results, then it looks like a hard reboot may be in store; this is done simply by pulling out the power plug.

4. If you have ruled out common causes of error such as the router software crashes or conflicts with Windows Firewall, then changing your DNS server could be the solution.



How to Check DNS server?

You can find out whether changing DNS server has solved the problem by carrying out a simple test. 

Enter the URL of a well-known site in your browser (e.g. www.google.com). 

If the site can be accessed it means the DNS server is functioning properly.

If the site can't be accessed, you can enter the following IP address into your browser: 172.217.16.195. 

This is one of Google’s IP addresses. If Google doesn't appear after entering the address, it probably means there's a general internet problem rather than a problem with the DNS server.

Unable to add MySQL database in Plesk Customer Panel - Fix it now

This article covers how to fix issues that make it unable to add MySQL database in Plesk customer panel.


To resolve Cannot add MySQL database in Plesk Customer Panel:

1. Log in to Plesk and apply one of the following:

2. Enable Database server selection directive in Service Plans > Default > Permissions > Show more available permissions and press Update & Sync.

3. Switch MySQL default database server from None to localhost:3306 in Service Plans > Default > Hosting Parameters > Default Database Server and press Update & Sync.


Note: it is also applicable to cases when the Amazon RDS extension is installed.

If it is needed to provide the customer with the ability to select an Amazon server, enable the "database server selection" option.


How do I add a database to my Plesk Panel?

How to Create a New Database or Database User in the Plesk Control Panel

1. Log into your Control Panel.

2. Click on Databases.

3. Click on the Add New Database icon.

4. Next to Database Name enter the name you want to use.

For Type, choose either Microsoft SQL Server or MySQL (DNN uses Microsoft SQL Server).

Unable to add MS SQL database in Plesk - Fix it now

This article covers how to fix the error, Unable to add MS SQL database in Plesk.

Basically, the number of MS SQL databases is limited by the webspace, subscription, or reseller's plan.


In Web Admin Edition:

1. Log in to Plesk.

2. Go to Tools & Settings > License Management and check if Microsoft SQL Server support enabled or not:

a. If no, then it is required to purchase the MSSQL support first for the subscription.

MSSQL support is included in the Power Pack and Developer Pack.

b. If yes, then proceed to step 3.

3. Run the command below to get the current limit of MSSQL databases for the required webspace:

C:\> plesk bin subscription_settings --info example.com | findstr max_mssql_db

max_mssql_db 30 MS SQL databases

4. Increase the number of MSSQL databases for the required webspace (use the "-1" to set to the Unlimited value):

C:\> plesk bin subscription_settings -u example.com -max_mssql_db 100

C:\> plesk bin subscription_settings -u example.com -max_mssql_db -1


In Web Pro and Web Host Editions:

1. Log in to Plesk.

2. Go to Subscriptions > example.com > Account > Resources.

3. Find the MS SQL databases number.

a. If it reached its limit, increase it in one of the following ways:

Go to Subscriptions > example.com > Customize > Resources page and increase the MS SQL databases limit (changes will affect only this subscription).

Go to Subscriptions > example.com > Service Plan: Default > Resources page and increase the MS SQL databases limit (changes will affect all subscriptions assigned to this service plan).

b. If the limit has not been reached, then the limit may also be set at the reseller's level. Proceed to the next step.

4. Go to Subscriptions > example.com > Subscriber: John Doe > Provider: Jane Doe.

5. Click the Change Plan button to increase the MS SQL databases number for all resellers assigned to this service plan.

Click the Customize button to increase the MS SQL databases number only for this reseller.

6. Find the MS SQL databases number and increase it to the required value.

Unable to allow access for disk path in libvirtd - Fix it Now

This article covers tips to fix the error Unable to allow access for disk path in libvirtd. By default, migration only transfers the in-memory state of a running guest (such as memory or CPU state). Although disk images are not transferred during migration, they need to remain accessible at the same path by both hosts.


To fix Unable to allow access for disk path in libvirtd error:

Set up and mount shared storage at the same location on both hosts. The simplest way to do this is to use NFS:

1. Set up an NFS server on a host serving as shared storage. The NFS server can be one of the hosts involved in the migration, as long as all hosts involved are accessing the shared storage through NFS.

# mkdir -p /exports/images
# cat >>/etc/exports <<EOF
/exports/images    192.168.122.0/24(rw,no_root_squash)
EOF


2. Mount the exported directory at a common location on all hosts running libvirt. For example, if the IP address of the NFS server is 192.168.122.1, mount the directory with the following commands:

# cat >>/etc/fstab <<EOF
192.168.122.1:/exports/images  /var/lib/libvirt/images  nfs  auto  0 0
EOF
# mount /var/lib/libvirt/images

sudo sorry you must have a tty to run sudo - Fix it now

This article covers how to resolve the error sudo: sorry you must have a tty to run sudo which happens because the sudo command tries to execute a command that requires a tty. 


To fix "sudo: sorry, you must have a tty to run sudo" error:

You have to run your ssh command as follows to avoid error that read as sudo: Sorry, you must have a tty to run sudo Error:

ssh -t hostname sudo command

ssh -t user@hostname sudo command

ssh -t user@box.example.com sudo command1 /path/to/file


The -t option force pseudo-tty allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g., when implementing menu services. 

Multiple -t options force tty allocation, even if ssh has no local tty.


The requiretty option in sudoers file

If requiretty is set in the sudo config file sudoers, sudo will only run when the user is logged in to a real tty.

When this flag is set, sudo can only be run from a login session and not via other means such as cron, shell/perl/python or cgi-bin scripts. 

This flag is set on many distros by default. Edit the /etc/sudoers file, enter:

# visudo

Find line that read as follows:

Defaults    requiretty

Either comment out the line or delete it:

#Defaults    requiretty

Save and close the file.
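
The comment-out step can also be done non-interactively with sed. A sketch demonstrated on a scratch file, since /etc/sudoers must only ever be edited through visudo (which syntax-checks the result):

```shell
# Comment out "Defaults requiretty" (demo on a scratch file, not /etc/sudoers)
f=$(mktemp)
printf 'Defaults    requiretty\nDefaults    env_reset\n' > "$f"

sed -i 's/^Defaults[[:space:]]*requiretty/#&/' "$f"
cat "$f"
# prints:
# #Defaults    requiretty
# Defaults    env_reset
```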

Add MySQL database in Websitepanel and fix common errors

This article covers how to add a MySQL database in WebsitePanel and fix common errors related to this task.

A database interface allows you to create and manage the existing MySQL databases. If you are creating a PHP based application or any application that uses a database, then you will need to create a database and a database user to access this database.


To create a MySQL database in WebsitePanel:

1. Click on the plan you want to add a MySQL database to.

2. Then click Databases.

3. Click MySQL.

4. Click Create Database.

5. Enter a name for your database.

6. Click Save.

7. You can click on the database you just added to edit it.

8. You can view existing users or delete or backup the database.

9. Click Save when you are finished


Success! You can view your added databases and see how many allowed databases you've used.

Encrypt email messages in Outlook - Follow this guide now

This article covers the different methods to encrypt email messages in Outlook: using certificates (S/Mime), Office 365 Message Encryption (OME), and using encryption add-ins.


To Encrypt a single message:

1. In message that you are composing, click File > Properties. 

2. Click Security Settings, and then select the Encrypt message contents and attachments check box. 

3. Compose your message, and then click Send.


In Outlook, all attachments are encrypted.

Recipients who access the encrypted email via the Office Message Encryption portal can view attachments in the browser.

Note that if the recipient of the file is using an Outlook.com account, they can open encrypted Office attachments on the Office apps for Windows.


To view an encrypted email in Outlook:

1. Select Read the message.

2. You'll be redirected to a page where you can sign in and receive a single-use code.

3. Check your email for the single-use code. Enter the code in the browser window, then select Continue to read your message.


To encrypt a message in Office 365:

1. Sign in with Global Admin credentials.

2. Click on Admin.

3. Click on Settings.

4. Click on Services & add-ins.

5. Click on Microsoft Azure Information Protection.

Create Keytab File for Kerberos Authentication in Active Directory

This article covers how to create keytab files for Kerberos. Active Directory uses Kerberos version 5 as authentication protocol in order to provide authentication between server and client. Kerberos protocol is built to protect authentication between server and client in an open network where other systems also connected.


The Kerberos Keytab file contains mappings between Kerberos Principal names and DES-encrypted keys that are derived from the password used to log into the Kerberos Key Distribution Center (KDC).


The keytab is generated by running kadmin and issuing the ktadd command. If you generate the keytab file on another host, you need to get a copy of the keytab file onto the destination host ( trillium , in the above example) without sending it unencrypted over the network.


To Create a Kerberos principal and keytab files for each encryption type you use:

1. Log on as the Kerberos administrator (Admin) and create a principal in the KDC.

You can use cluster-wide or host-based credentials.

The following is an example when cluster-wide credentials are used. It shows MIT Kerberos with admin/cluster1@EXAMPLE.COM as the Kerberos administrator principal:

bash-3.00$ kadmin -p admin@EXAMPLE.COM

kadmin: add_principal vemkd/cluster1@EXAMPLE.COM

Enter password for principal "vemkd/cluster1@EXAMPLE.COM": password

Re-enter password for principal "vemkd/cluster1@EXAMPLE.COM": password

If you do not create a VEMKD principal, the default value of vemkd/clustername@Kerberos_realm is used.

2. Obtain the key of the principal by running the subcommand getprinc principal_name.

3. Create the keytab files, using the ktutil command:

Create a keytab file for each encryption type you use by using the add_entry command.

For example, run ktutil: add_entry -password -p principal_name -k number -e encryption_type for each encryption type.
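Put together, the steps above might look like the following session sketch, assuming MIT Kerberos and the example principal from step 1 (the kvno, encryption types, and keytab path are illustrative, not values the article specifies):

```shell
# Sketch: build a keytab containing one entry per encryption type.
# 'ktutil' prompts for the principal's password at each add_entry.
ktutil
ktutil:  add_entry -password -p vemkd/cluster1@EXAMPLE.COM -k 1 -e aes256-cts-hmac-sha1-96
ktutil:  add_entry -password -p vemkd/cluster1@EXAMPLE.COM -k 1 -e aes128-cts-hmac-sha1-96
ktutil:  write_kt /etc/security/keytabs/vemkd.keytab
ktutil:  quit
```

You can then verify the entries with `klist -kt /etc/security/keytabs/vemkd.keytab`.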

Add MySQL Service on WebsitePanel - Do it now

This article covers how to add the MySQL service in WebsitePanel.

WebsitePanel began as DotNetPanel, a hosting control panel its creators built exclusively for the Windows web technology platform.


To add MySQL Service on WebsitePanel, follow the steps provided below:

1. Download the installation file from here. Choose to skip registration and start the download.

2. Run the .msi file to start the installation. Click “Next” when prompted.

3. Select the product to upgrade, then click “Next“.

4. Click “Execute” to apply the update.

5. Click “Next” to configure the product.

6. If you already have a database within your server, the installer will check and update your database. Type in the correct password and then press “Check“, then press “Next” when the connection is successful.

7. Click “Execute” to apply the configuration, then “Next” to finish this part of the installation.

8. Click “Next” to proceed.

9. The installation is completed, click “Finish” to continue.

10. This shows the product you have installed; you can close the installer here or click “Add…” to install additional products such as MySQL Server 5.7.

11. Select the “CONFIGURATION” tab and click “Servers” from the drop-down list.

12. Next, click on “My Server“, scroll down and search for “MySQL 5” tab (since we have installed MySQL 5.5 by default).

13. Click on the small “Add” button beside the “MySQL 5” tab to add the MySQL service to WebsitePanel.

14. From the drop-down list, choose the version of MySQL that had been installed (MySQL Server 5.5 in our case), then click "Add Service".

15. You will see a message saying that installation of MySQL Connector/Net is required; follow the instructions and download the installer.

16. Run the downloaded installer but DO NOT choose “Typical Installation“, choose “Custom Installation” instead and remove the entire “Web Providers” section from your installation as it will give a nasty error after installation. Proceed with the installation by clicking “Next” and then “Install“.

17. Return to the MySQL Service Properties page, fill in the password with the password used to login to MySQL root account and then click “Update” at the bottom of the page. If the password entered is correct, the MySQL service will be successfully added to the list of server services.

Virtuozzo VS Hyper-V - Which is better

This article covers a comparison between Virtuozzo and Hyper-V.

Hyper-V and Virtuozzo are both popular VPS platforms used by a large number of web hosting providers for the provisioning of Windows VPS hosting services, with Virtuozzo being favoured for Windows Server 2003 VPS hosting and Hyper-V being the most reliable solution for Windows Server 2008 VPS hosting services.


Advantages of using Virtuozzo over Hyper-V include:

1. Direct Linux support – Virtuozzo can be installed on Windows or Linux VPS hosting nodes; although Hyper-V can host virtual machines running Linux, it is only available on Windows Server 2008.

2. Web based control panel (Parallels Power Panel) – the Parallels Power Panel allows users to manage their Linux or Windows VPS server from a web based interface, meaning that if they cannot access the VPS server via Remote Desktop, they can still use the Power Panel to restart it or to kill any services or processes that may be overloading its resources.

3. Separate application – the fact that Virtuozzo is a separate application installed on top of the operating system can be an advantage in some cases. For example, if a web hosting provider wishes to stop using a server for VPS hosting, all they have to do is uninstall the application, although in most cases an OS reload is advised anyway to ensure a blank canvas to start with.


Advantages of using Hyper-V over Virtuozzo:

1. Cost – with Virtuozzo, web hosting providers have to pay for both the Virtuozzo application and the operating system license, but because Hyper-V is part of the Windows Server 2008 operating system, they only need to pay for the operating system license. This helps reduce the cost of Hyper-V VPS hosting services; as the price of the operating system falls, Hyper-V hosting will eventually match Virtuozzo Windows Server 2003 hosting on price, and people will gradually move over to Windows Server 2008 VPS hosting.

2. Reliability – as Hyper-V is part of the Windows Server 2008 operating system, web hosting providers are able to guarantee reliable Windows Server 2008 VPS server hosting services.

3. Native support for Windows Server 2008 – although Virtuozzo may have support for Windows Server 2008, it hasn’t been able to offer the most reliable of Windows Server 2008 VPS hosting services.

Unable to add bridge port vnet0 No such device - Fix it now ?

This article covers how to resolve the error, Unable to add bridge port vnet0: No such device which happens when the bridge device specified in the guest's (or domain’s) <interface> definition does not exist.

To verify that the bridge device listed in the error message does not exist, use the shell command ifconfig br0.

A message similar to this confirms the host has no bridge by that name:

br0: error fetching interface information: Device not found

If this is the case, continue to the solution.


To fix the error, Unable to add bridge port vnet0: No such device :

1. Edit the existing bridge or create a new bridge with virsh

Use virsh to either edit the settings of an existing bridge or network, or to add the bridge device to the host system configuration.

2. Edit the existing bridge settings using virsh

Use virsh edit name_of_guest to change the <interface> definition to use a bridge or network that already exists.

For example, change type='bridge' to type='network', and <source bridge='br0'/> to <source network='default'/>.
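Put together, the check and the fix above might look like this sketch (name_of_guest stands in for your actual domain name, as in the article):

```shell
# 1. Confirm the bridge named in the error really is missing.
ip link show br0 || echo "br0 does not exist on this host"

# 2. Open the guest's XML definition for editing.
virsh edit name_of_guest

# 3. Inside the editor, point the interface at the existing
#    'default' libvirt network instead of the missing bridge:
#      <interface type='network'>
#        <source network='default'/>
```

After saving, restart the guest with `virsh start name_of_guest` and the vnet0 error should no longer appear.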

Install PowerDNS and PowerAdmin on CentOS 7 - How to do it

This article covers the step by step procedure to install PowerDNS on CentOS 7. PowerDNS (pdns) is an open source DNS server written in C++ and released under GPL License. It has become a good alternative for the traditional DNS server Bind, designed with better performance and low memory requirements. 

PowerDNS provides two products, the Authoritative server, and the Recursor. 

The PowerDNS Authoritative server can be configured with different backends, including plain Bind zone files, an RDBMS such as MySQL, PostgreSQL or SQLite3, or LDAP.


To Install PowerDNS on CentOS 7:

1. First let's start by ensuring your system is up-to-date:

$ yum clean all

$ yum -y update

2. Install PowerDNS and its backend.

First, you need to enable the EPEL repository and install all required packages on your system:

$ yum install epel-release

$ yum install bind-utils pdns pdns-recursor pdns-backend-mysql mariadb mariadb-server

Enable MariaDB and PowerDNS to start on boot:

$ systemctl enable mariadb

$ systemctl enable pdns

$ systemctl enable pdns-recursor

3. Configure MariaDB.

By default, MariaDB is not hardened. You can secure it using the mysql_secure_installation script. Read and follow each step carefully; the script will set a root password, remove anonymous users, disallow remote root login, and remove the test database and access to it:

mysql_secure_installation

4. Create PowerDNS Database and User in MariaDB.

Login as a MariaDB root and create a new database and tables:

$ mysql -u root -p
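The article does not list the statements themselves; a minimal sketch might look like the following, where the database name, user, and password are placeholders, and the PowerDNS table schema itself still has to be loaded separately (e.g. from the schema file shipped with pdns-backend-mysql):

```shell
# Create a placeholder PowerDNS database and user in MariaDB.
# All names and the password below are illustrative, not prescribed.
mysql -u root -p <<'SQL'
CREATE DATABASE powerdns;
GRANT ALL ON powerdns.* TO 'pdns'@'localhost' IDENTIFIED BY 'StrongPassword';
FLUSH PRIVILEGES;
SQL
```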

5. Configure PowerDNS.

Open the /etc/pdns/pdns.conf file.

Finally, restart the PowerDNS service and enable it on boot:

$ systemctl restart pdns.service

$ systemctl enable pdns.service

6. Configure Recursor.

Open the /etc/pdns-recursor/recursor.conf file.
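The article does not show the directives for either file; a minimal sketch of the two configurations, assuming the gmysql backend and the placeholder credentials above, might be:

```shell
# /etc/pdns/pdns.conf -- authoritative server (values are examples)
launch=gmysql
gmysql-host=localhost
gmysql-user=pdns
gmysql-password=StrongPassword
gmysql-dbname=powerdns
local-port=53

# /etc/pdns-recursor/recursor.conf -- recursor (values are examples)
local-address=127.0.0.1
local-port=5300
```

Restart pdns-recursor after editing, and adjust ports and addresses to match how you want the authoritative server and recursor to coexist on the host.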

Add Remote Linux Host to Cacti for Monitoring - Do it now

This article covers how to add a Linux host to Cacti.

Basically, Cacti is a network monitoring tool that creates personalized graphs of server performance.

SNMP, short for Simple Network Management Protocol is a protocol used for gathering information about devices in a network. Using SNMP, you can poll metrics such as CPU utilization, memory usage, disk utilization, network bandwidth, and so on. 


To install the SNMP agent on Ubuntu, run the command:

$ sudo apt install snmp snmpd -y


To install the SNMP agent on CentOS 8, run the command:

$ sudo dnf install net-snmp net-snmp-utils -y


SNMP starts automatically upon installation.

To confirm this, check its status by running:

$ sudo systemctl status snmpd

If the service is not running yet, start and enable it on boot as shown:

$ sudo systemctl start snmpd
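Once snmpd is running, it is worth confirming from the Cacti server that the host answers SNMP queries before adding it. The host address and community string below are placeholders; public is only the common default, so use whatever you configured in snmpd.conf:

```shell
# Make sure snmpd also comes up on boot.
sudo systemctl enable snmpd

# Poll the remote host's system subtree from the Cacti server.
# Replace the address and community string with your own values.
snmpwalk -v2c -c public 192.168.1.10 system
```

If snmpwalk returns the system description and uptime, Cacti will be able to graph the host.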


To Add Remote Linux Host to Cacti for Monitoring:

1. Install the SNMP service on the Linux hosts to be monitored.

2. Configuring SNMP service.

3. Configure the firewall rules for snmp.

4. Adding remote Linux host to Cacti.


To Install and Configure Cacti:

1. Cacti requires a few more dependencies; run the following command to install them:

yum -y install net-snmp rrdtool net-snmp-utils

2. As we have all the dependencies ready, we can now download the install package from Cacti website.

cd /var/www/html

wget http://www.cacti.net/downloads/cacti-1.1.10.tar.gz

3. You can always find the link to the latest version of the application on Cacti download page. Extract the archive using the following command.

tar xzvf cacti*.tar.gz

4. Rename your Cacti folder using:

mv cacti-1*/ cacti/

5. Now import the Cacti database by running the following command.

cd /var/www/html/cacti

mysql -u root -p cacti_data < cacti.sql

6. The above command imports the cacti.sql schema into the cacti_data database as the root user.

It will also prompt you for the root user's password before importing the database.

7. Now edit Cacti configuration by running the following command.

nano /var/www/html/cacti/include/config.php

8. Now find the following lines and edit them according to your MySQL database credentials.

/* make sure these values reflect your actual database/host/user/password */

$database_type     = 'mysql';

$database_default  = 'cacti_data';

$database_hostname = 'localhost';

$database_username = 'cacti_user';

$database_password = 'StrongPassword';

$database_port     = '3306';

$database_ssl      = false;
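Before loading the Cacti web installer, you can confirm that the credentials in config.php actually work by connecting with the same values. The user, database, and password here simply mirror the placeholders above; make sure the cacti_user account has actually been granted access to cacti_data in MariaDB:

```shell
# Should list the Cacti tables if the grants and password are correct.
mysql -u cacti_user -p'StrongPassword' -h localhost cacti_data -e 'SHOW TABLES;'
```

An "Access denied" error here means the web installer will fail with the same credentials, so fix the grant or the password in config.php first.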