Run SQL Scripts – Automating Your Analysis

Of course, by now you should realize that you don’t have to re-create your SQL. You can use the Save function in Run SQL Scripts to save your work. What I suggest you do is put all of the SQL that you need to run regularly into a separate Run SQL Script file; then run each statement one at a time. When the results appear, right-click on the results and choose the Save Results… option to generate your report as an .xlsx, .csv, or whatever format meets your requirements. (You must first enable the Save option. See chapter 1 for instructions.)

SQL in a CL Program

While running established SQL statements regularly may work for you, some may need or want to run the SQL from a scheduled job. Unfortunately, both the basic and the advanced job scheduler accept only CL commands, not SQL. So how do you schedule SQL? One way to accomplish this is to put your SQL in a CL program. You have two options for running SQL from a CL program: Run SQL (RUNSQL) and Run SQL Statement (RUNSQLSTM). To use RUNSQLSTM, put your SQL into a source physical file or a file in the IFS and then use RUNSQLSTM in your CL program to run the SQL defined in that source file. Here’s an example: https://www.ibm.com/support/pages/sample-cl-program-run-sql-scripts-using-runsqlstm
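
For reference, a RUNSQLSTM invocation in CL looks something like this sketch (the stream-file path is an illustrative assumption):

```
PGM
  /* Run the statements stored in an IFS stream file;          */
  /* a source physical file member (SRCFILE/SRCMBR) works too  */
  RUNSQLSTM SRCSTMF('/home/MYUSER/daily_report.sql') +
            COMMIT(*NONE) NAMING(*SQL)
ENDPGM
```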

However, most people use RUNSQL, and that’s the command I’ll use for my example. The tricky part of adding SQL to a CL program is that you can’t use the SELECT statement, so you have to do a bit of sleight of hand to get the SELECT accomplished.
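
The program itself isn’t reproduced here, but a rough sketch of the shape such a program takes follows (the work table, columns, and WHERE test are illustrative assumptions; note the doubled single quotes around *YES):

```
PGM
  /* RUNSQL can't run a bare SELECT, so materialize the result */
  /* into a work table first -- the "sleight of hand"          */
  RUNSQL SQL('CREATE TABLE QTEMP.LMTUSERS AS +
              (SELECT AUTHORIZATION_NAME, STATUS +
               FROM QSYS2.USER_INFO +
               WHERE LIMIT_CAPABILITIES = ''*YES'') +
              WITH DATA') +
         COMMIT(*NONE) NAMING(*SQL)
  /* Produce a spooled file from the work table */
  RUNQRY QRY(*NONE) QRYFILE((QTEMP/LMTUSERS)) OUTTYPE(*PRINTER)
ENDPGM
```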

A few notes regarding this program:

  • First, this program is provided as is: no warranties, use at your own risk, and whatever words I need to use to make sure I’m not liable in case it fails or doesn’t work for you!
  • Now for some info that will actually be helpful…
    • All literals have to be delimited with two single quotes. Notice my WHERE clause; the *YES is surrounded with two sets of single quotes (those are not double quotes). Note: This issue will bite you if you have just copied your SQL from Run SQL Scripts, because that interface uses one single quote to delimit literals.
    • I use a naming convention of *SQL (NAMING(*SQL)) so I can use the period (.) rather than a slash (/) to name objects—for example, QSYS2.USER_INFO.
    • I specify COMMIT(*NONE) so I don’t have to worry about uncommitted transactions being rolled back.
    • While this specific example works without it, in other CL programs running SQL, I’ve had to specifically set my CCSID to the language on my system to have the report correctly display the information.

For what I was working on when I wrote this program, I needed to create a spooled file (thus my use of the RUNQRY command). But you may want to send the contents of the file to a stream file and then download it to Excel. Or maybe you generate the spooled file as I did and then PDF it and email it off the system. My intent in providing this example is not to provide you with a total solution but to get you thinking about how you might make this work in your environment, using the utilities you already have. In addition, many other examples of adding SQL to a CL program exist on the Internet; you can learn from those. If you want a more-thorough explanation of RUNSQL, you can find it here: https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_74/sqlp/rbafyrunsql.htm

Ensure All Connections Are Using Strong Encryption – Securing the Connection to IBM i

The other consideration for encryption is the strength of the algorithms in use. The system values that control the strength of encryption on the system are the QSSL* system values. I describe the three values and how they work together in chapters 3 and 10 of IBM i Security Administration and Compliance, Third Edition. IBM i 7.4 changed the default settings such that the default protocols implemented no longer enable weak protocols. Unfortunately, I’ve seen organizations blindly set the QSSLPCL system value back to include weak protocols (which in turn enables weak encryption algorithms) just because they want to make certain they don’t run into problems. Please resist the temptation to do that! You can easily determine whether weak protocols are in use on your system. The IBM Support page shown below provides instructions for going into Service Tools and enabling counters to count the number and type (SSLv3, TLS1.0, TLS1.1, etc.) of encrypted sessions. The instructions are buried toward the end of this page (https://www.ibm.com/support/pages/security-bulletin-vulnerability-sslv3-affects-ibm-i-cve-2014-3566), so here they are for your convenience:

  1. Access System Service Tools (SST). Type STRSST, sign on with your Service Tools ID, and then take the following menu options:
  2. 1 – Start a service tool.
  3. 4 – Display/Alter/Dump.
  4. 1 – Display/Alter storage.
  5. 2 – Licensed Internal Code (LIC) data.
  6. 14 – Advanced analysis. (You must page down to see this option.)
  7. Page down until you find the SSLCONFIG option. Type 1 (Select) next to the option and press Enter. You are now on the Specify Advanced Analysis Options window. The command shows as SSLCONFIG.

You now have several actions you can take. To see the entire list, enter -h and press Enter to display the options available.

For example, to start tracking the connections, issue the counter-enable option from that listing.

The count starts immediately. No need to stop/restart anything or to IPL.

Once the counter has been enabled for a period of time, go back into SST following the instructions above and issue the display option to see the results, as shown in Figure 12.6.

Figure 12.6: Use the SSLCONFIG macro in SST to determine if weak protocols are being used.

Note that the counts will be reset with every IPL, and you’ll have to go back in and re-enable the counts.

To stop the counters, issue the disable option; to reset the counts without having to IPL, issue the reset option.

If you find that weak protocols or ciphers are in use, this link provides the instructions for running a trace to determine which job(s) used them:
https://www.ibm.com/support/pages/how-determine-ssltls-protocol-and-cipher-suite-used-each-active-system-tls-connection-ibm-i

There’s really no excuse for blindly enabling weak ciphers by adding weak protocols back into QSSLPCL when it’s so easy to determine whether they’re required. I recommend that you go through this process to ensure your encrypted sessions are using strong encryption.

Be aware that some vendors, such as those supplying secure file transfer or database encryption, may use their own encryption configuration and not rely on the QSSL* system values. If you have software that provides any type of encryption, check with the vendor to ensure their product uses strong encryption and that the version you have installed has that support.

Securing New Nav

Finally, you’ll want to make sure that New Nav is using a secure connection: that is, https (not http). Several years ago, IBM stopped shipping a self-signed certificate with its browser interfaces due to the number of browsers that were either downgrading the connection from https to http or rejecting it altogether when a self-signed certificate was in use. IBM Support has provided instructions for enabling Navigator for i to use https. While the instructions have been updated to say that they can be used for New Nav, some of the instructions only reference Heritage Nav. Make sure that you are enabling SSL/TLS for the ADMIN1 HTTP admin server job, which is the server for New Nav. The examples also only show using a self-signed certificate. If you have access to a well-known CA, I recommend you use a certificate issued by that CA rather than a self-signed certificate so the certificate won’t be rejected or downgraded by browsers.
https://www.ibm.com/support/pages/enabling-tls-ibm-navigator-i

NETSTAT – Securing the Connection to IBM i

One way to look at all connections is to use the Work with TCP/IP Network Status (WRKTCPSTS) command, aka NETSTAT. Choosing to look at the IPv4 and IPv6 connections will show you the established connections and which server they’re connected to. The server name will indicate whether the connection is secure, typically by adding either -s or -ssl to the end of the server name (e.g., Telnet-ssl). The port used is also an indication of whether the connection is secure. For example, if the Telnet session is using port 23, then it’s not secure. But if it’s coming in via port 992, the connection is encrypted. Here’s a list of the well-known ports:
https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers
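
The connection list described above is option *CNN of the command:

```
WRKTCPSTS OPTION(*CNN)   /* or, equivalently, NETSTAT *CNN */
```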

In addition, you’ll need the list of ports used by the ACS:
https://www.ibm.com/support/pages/tcpip-ports-required-ibm-i-access-and-related-functions

By analyzing the output of NETSTAT, you can determine whether there are currently any unsecured connections established to your IBM i. The problem with NETSTAT is that it’s a point-in-time view; it’s accurate only for the moment at which you ran the command. Other connections may be established in the middle of the night, for example.

The Audit Journal

To have a complete analysis of your connections, you’ll need to utilize the audit journal. You’ll need to add two values to the QAUDLVL system value: *NETTELSVR to audit Telnet connections and *NETSCK to get all other IP connections. (If you want to analyze UDP connections, you’ll need to add *NETUDP.) Once you’ve added these values, all connections will be logged. (Note that these values may generate a significant number of audit journal entries, so once your analysis is completed you may want to remove these additional values from QAUDLVL.)

To analyze the connections, you’ll need to gather the SK audit journal entries. The SK entries aren’t (yet) available as a service, so we’ll use CPYAUDJRNE to get the information:
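
A sketch of that copy step (the output-file prefix AUDJRN is an arbitrary choice; CPYAUDJRNE appends the entry type, so the result lands in QTEMP/AUDJRNSK):

```
CPYAUDJRNE ENTTYP(SK) OUTFILE(QTEMP/AUDJRN) JRNRCV(*CURCHAIN)
```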

There are many subtypes within the SK audit journal entry (for the full list, see the IBM i Security Reference manual, chapter 9), but the subtype we’re looking for is ‘A’, meaning that the connection has been accepted. So right away, I’m going to use the following SQL statement to filter the SK entries to just look for the accepted connections.
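
A sketch of that filter, assuming the SK entries were copied by CPYAUDJRNE into QTEMP.AUDJRNSK (the subtype column is shown here as SKSTYP; check the QASYSKJ5 model file for the exact field name on your release):

```sql
-- Keep only accepted connections (subtype 'A')
SELECT *
  FROM QTEMP.AUDJRNSK
 WHERE SKSTYP = 'A';
```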

Even with this filter, you’re still likely to have a lot of entries. At this point, your next step depends on why you’re examining these entries. If you think you’ve gotten all connections secured, you may want to specifically look for connections coming in over unsecured ports. The following SQL lists the ports that the connections should be using; anything coming in over a different port will be listed. (Note: This is not a complete list of secure ports!)
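
A sketch of that test, using the same assumed file and field names as above. The port list is illustrative and, as noted, not complete; extend it with the secure ports your environment uses:

```sql
-- Accepted connections that arrived on a port NOT in our secure list
SELECT SKTSTP, SKLPRT, SKRADR
  FROM QTEMP.AUDJRNSK
 WHERE SKSTYP = 'A'
   AND SKLPRT NOT IN (22,    -- SSH
                      443,   -- HTTPS
                      448,   -- DDM/DRDA over TLS
                      990,   -- FTPS control
                      992,   -- Telnet over TLS
                      9470, 9471, 9472, 9473,
                      9474, 9475, 9476);  -- host servers over TLS
```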

If you are just starting out with this project, you’ll probably want to be more selective in the entries. For example, if you want to ensure all Telnet sessions are encrypted, you can use an SQL that will only include cleartext Telnet sessions (that is, connections coming in over port 23).
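
With the same assumed file and field names, the Telnet-only version is simply:

```sql
-- Cleartext Telnet sessions only (port 23)
SELECT SKTSTP, SKRADR
  FROM QTEMP.AUDJRNSK
 WHERE SKSTYP = 'A'
   AND SKLPRT = 23;
```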

Now that you’ve selected the audit entries, how do you read/make sense of the information in the audit journal entry? For most audit journal entry types, I find the job name, user, and number helpful as well as the program and program library. But in this case, these fields are worthless. That’s because these entries are generated before the user’s job has started. See Figure 12.5.

Figure 12.5: SK entries are generated before the user’s job starts, so traditional fields are of little use.

Likewise, the User profile field that typically contains the name of the current user is also worthless in this audit journal entry as it will always contain the value *NONE. So what is helpful about this entry? You’ll want to focus on these two fields:

  • SKLPRT: Local port
  • SKRADR: Remote IP address

To decipher where the connection is coming from, you’ll have to do a reverse DNS lookup using the remote IP address. You may also find the timestamp (SKTSTP) field helpful if the connections are coming from remote servers and you have to look through scheduled jobs to find the incoming task.

Helpful Hints

As you start to play around with the SK entries, you may want to force a few entries to understand how to better read and understand what you’re seeing. (That’s what I always do when I start using an audit journal entry type that I’m not familiar with.) To force the entry, remember that SK entries log new connections; therefore, it’s not sufficient to simply log off your Telnet session and log back on. That will not generate an SK entry. You must close and reinitiate the session to get an SK entry generated.

Note that the *NETCMN value was reworked in IBM i 7.3 such that it no longer logs accepts and connects. If you have *NETCMN specified in QAUDLVL and think you’re going to get the audit journal entries I’ve been describing, you won’t. As I said earlier, you must specify *NETTELSVR and *NETSCK (and *NETUDP if you want to analyze your UDP connections) in the QAUDLVL system value. It may take a bit of investigation, but analyzing the SK audit journal entries will help you know whether all connections are secure (encrypted) or not.

Encrypted Sessions – Securing the Connection to IBM i

I can’t emphasize enough the need to encrypt all sessions—not just communications out of your network, but internal communications as well. Why, you ask? Because if malware or an intruder makes its way into your network, any data flowing around your network in cleartext can be read, especially user IDs and passwords. Those credentials will be skimmed and used to gain access to servers around your network, harvest the data on those servers, and/or plant additional malware. Encrypting internal communications prevents user IDs and passwords from being skimmed and data from being stolen in this manner.

How does one encrypt communications to your IBM i? Most servers are registered in Digital Certificate Manager (DCM). Web applications must first be configured to use SSL/TLS, and then they’re registered in DCM. You can define your IBM i to be a Certificate Authority (CA)—that is, the entity that creates digital certificates—but most organizations choose to get a certificate from a well-known CA or have an internal CA so that the certificates will be trusted by browsers. (If you choose to have your IBM i be your CA, you’ll have to download its CA certificate into the trust list of all users’ browsers to ensure the connections are encrypted—that is, use https. If not, some browsers will revert the connection to http when they don’t trust the issuing CA.)

If you’ve never considered configuring your connections to IBM i to be encrypted, using your IBM i as the CA is a good place to start. All of the software you need is already provided by the operating system and likely already installed. In other words, it will take no additional funds for you to give this a try. You can continue using your IBM i as a CA; but if you later decide to use a well-known CA instead, you already know how to get things configured: you simply import that CA’s certificate into DCM and follow the same steps as you did to configure the servers when your IBM i was the CA.

DCM got a facelift a while ago, but many people missed that message. With New Nav, the link provided takes you to the new DCM interface. See Figure 12.4.

Figure 12.4: An easy way to launch DCM is through New Nav Bookmarks.

Unfortunately, IBM’s documentation hasn’t kept up with the new interface. That said, the link below provides very good step-by-step instructions for creating a CA on IBM i, requesting and importing a certificate from a well-known CA, and assigning it to servers and more. While the screenshots are the old interface, the terminology and flow in the new DCM is the same as the old, so you should be able to follow along. If you haven’t used DCM, I encourage you to use this resource to get familiar with this important part of configuring encrypted sessions:
https://www.ibm.com/support/pages/digital-certificate-manager-dcm-frequently-asked-questions-and-common-tasks

Ensuring All Communications Are Encrypted

It’s easy to determine whether my connection to IBM i is encrypted. If I’m using Access Client Solutions (ACS) to establish a Telnet session, all I have to do is look in the lower right corner of the session to see the port used, an open or closed padlock, and encryption strength if the connection is secure. For ODBC, I simply look for the job name to be QZDASSINIT (SS meaning secure) rather than QZDASOINIT (where SO is open or not encrypted). But if I’m an administrator and I want to prove to myself or am required to prove to an auditor that all connections are secure, how might I do that?

Discovering Who’s Using SSH – Securing the Connection to IBM i

Before you limit which profiles can use SSH, you may want to determine who’s already using it so you can either allow the access or make the conscious choice to prevent it. There’s no specific audit journal entry type for SSH access, but I’ve discovered that you can determine SSH access by examining either the GS entries (generated by specifying either *SECURITY or *SECSCKD in QAUDLVL) or the JS entries (generated by specifying *JOBDTA or *JOBBAS in QAUDLVL). I’m providing an example using the GS entries since it’s more likely you already have the GS entries. (More organizations have *SECURITY as a value in QAUDLVL than *JOBDTA or *JOBBAS.)

First, retrieve the GS entries:
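
A sketch of the copy step (the output-file prefix AUDJRN is an arbitrary choice; CPYAUDJRNE appends the entry type, so the result lands in QTEMP/AUDJRNGS):

```
CPYAUDJRNE ENTTYP(GS) OUTFILE(QTEMP/AUDJRN) JRNRCV(*CURCHAIN)
```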

Then get a list of entries including the timestamp, user, and IP address fields:
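
Against that output file, something like the following works. The field names are assumptions based on the naming convention of the QASYGSJ5 model file (two-character entry prefix plus field); verify them on your release:

```sql
-- Timestamp, user, and remote IP address for each connection
SELECT GSTSTP, GSUSER, GSRADR
  FROM QTEMP.AUDJRNGS
 ORDER BY GSTSTP;
```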

Whether you use the GS or JS entries, you’ll notice that there are multiple entries for a single connection. (The best way to understand this is to make an SSH connection yourself and look at the audit entries generated.) Unfortunately, the timestamp, user, and IP address are really the only useful information the audit journal provides for SSH, whether you are looking at the JS or GS entries. Obviously, actions taken, such as the creation or deletion of objects, will generate their respective audit journal entries, but examining the CD (Command) audit journal entries will not yield the commands entered via SSH. The only way to log the activity that occurs via SSH is to enable syslog. Enabling syslog is not what you’d call straightforward on IBM i. Here’s a link to an IBM Support document that describes the process: https://www.ibm.com/support/pages/syslog-syslogd-pase-ibm-i

SNMP

The other server I need to mention is Simple Network Management Protocol, better known as SNMP. This is the TCP/IP protocol that allows you to manage devices throughout your network, of which IBM i may be one. The problem with this protocol is that intruders can use it to map out your network, and the first two versions of this protocol make that quite easy to do. IBM i 7.5 builds on the enhancements in IBM i 7.4 to only allow SNMPv3. If you use SNMP in your network or you specifically want to prevent the SNMP agent from running on your system, see https://www.ibm.com/docs/en/i/7.5?topic=snmp-controlling-access for more details and recommendations.

Other Best-Practice Settings – Securing the Connection to IBM i

You may also want to change some of the default settings in your SSH configuration file. For example, I recommend that you do not let QSECOFR use SSH. (QSECOFR is a well-known profile. If someone is going to try to access your system, that’s a profile they know exists on all IBM i systems like root exists on all UNIX/Linux systems.) To disallow QSECOFR login, set PermitRootLogin to No.

Other configuration options to set:

  • Your organization (likely your lawyers) may require all logins to display a banner stating access is for management-approved uses only. Specify the path to the file containing the banner text (Banner /your pathname).
  • Only allow the more secure SSH2 protocol (Protocol 2).
  • Use Session idle time-out (ClientAliveInterval 300).
  • Session keep-alive count is the number of client-alive probes the server sends without receiving a response before it disconnects the session. If your interval is 300 and you set the count to 5, the server probes every 5 minutes, and an unresponsive session is disconnected after 25 minutes. It’s a bit like the QINACTITV and QINACTMSGQ system values. For SSH, it’s recommended that you just time out the inactive session and not keep it alive (ClientAliveCountMax 0).
  • Set a max number of sign-on attempts (MaxAuthTries). Match your QMAXSIGN system value, typically 3-5.
  • Consider using public/private keys for authentication and disallowing the use of passwords on the connection. Here’s a link to the support page describing how to configure this: https://www.ibm.com/support/pages/configuring-ibm-i-ssh-sftp-and-scp-clients-use-public-key-authentication
  • Disallow host-based authentication, which is different from and even less secure than user ID/password authentication (HostbasedAuthentication no).
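
Pulled together, the recommendations above look like this sshd_config fragment (the banner path is a placeholder, and pick a MaxAuthTries that matches your QMAXSIGN value):

```
PermitRootLogin no
Banner /QOpenSys/etc/ssh/banner.txt
Protocol 2
ClientAliveInterval 300
ClientAliveCountMax 0
MaxAuthTries 3
PubkeyAuthentication yes
PasswordAuthentication no
HostbasedAuthentication no
```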

Finally, consider configuring chroot. chroot is a UNIX/Linux method of setting a process’s root directory to restrict what the process can access. You may also see it referred to as a “container” or “jail.” Containers are often used when implementing automated SSH and SFTP processes. You know that I’m all for multiple layers of defense, and I do appreciate using chroot to provide at least a speedbump in the road to limit what users of SSH can access. But, just as prisoners can tunnel under prison walls or find other means of escape, users can escape the confines of chroot. I’m not saying not to use it; please do. Just don’t depend on it as your single method of securing SSH access. Also, if you’ve set up SSH access to allow a specific group, don’t forget to add that group profile to the profiles you’ve set up in your chroot configuration. For more information on chroot and IBM i, see this support page: https://www.ibm.com/support/pages/using-chroot-ibm-i-restrict-ssh-sftp-and-scp-specific-directories

Controlling Who Can Use SSH – Securing the Connection to IBM i

One of the servers I am going to spend a bit of time on is Secure Shell (SSH) because it’s become widely used and SSH clients are readily available. If you’re not controlling what can be installed on your users’ desktops, an SSH client such as PuTTY can be downloaded, installed, and in use in just minutes.

One of the popular features of most SSH clients is the file-transfer capability. The secure connection capabilities of SSH clients provide an alternative to unencrypted FTP connections. This feature may not seem like a security exposure and shouldn’t be if you’ve implemented object-level security. But if you’re like many IBM i organizations that haven’t taken this step or rely on exit-point technologies to secure your critical files, you’ve got an issue. (Note: IBM has not provided an exit point for the SSH daemon, and attempting to control access via the Sockets exit is tenuous at best.) SSH is a secure tunnel over which several different types of processes can flow. The fact that it’s multi-featured is one reason why I believe that IBM hasn’t provided an exit point specifically for SSH and why it’s often difficult to know how to block it, even at the firewall level.

If you’re not using SSH in your organization, make sure the autostart value is *NO. If you are using SSH, follow the instructions below to limit who can use it and the directories in which users can operate.
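
If you need to set that autostart value from CL, something like this should do it (I believe *SSHD is a valid server special value on current releases; prompt CHGTCPSVR with F4 to confirm on yours):

```
CHGTCPSVR SVRSPCVAL(*SSHD) AUTOSTART(*NO)
```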

When you think about who should be able to use SSH, it’s typically limited to a handful of people, often just administrators. Controlling which profiles are allowed to connect via SSH is actually quite simple. It can be accomplished by adding one or more directives to the configuration file associated with the SSH daemon. The configuration file can be found at this path:

The directives available for controlling access are DenyUsers, AllowUsers, DenyGroups, and AllowGroups. Regardless of the order in which they’re defined in the config file, they are evaluated in the order I’ve listed. You don’t have to specify values for all of these directives. (In fact, it doesn’t work if you do!) If you only want the members of one group (e.g., GRPSSH) to be able to use SSH, you only need to specify AllowGroups grpssh. Everyone who is not a member of GRPSSH is denied access.

What seems to work well for my clients is to create a group specifically for SSH use. If a user needs to use SSH, put them in this group. The group can be specified as either their Group profile or as one of their Supplemental group profiles. Do not overload this group—that is, do not use it for any purpose other than to control SSH. That way, there’s much less chance that you’ll have to exclude users in the configuration file—in other words, use the DenyUsers directive.

If you happen to have the situation where you have a group in which some but not all members of the group need SSH access, you can then go ahead and use the DenyUsers directive, specifying the names of the profiles that shouldn’t be allowed to use SSH. See Figure 12.3. Now, only members of the GRPSSH and SYSADMS groups will be allowed use of SSH, except for John, Joe, and Sam. All other users are also denied, just like John, Joe, and Sam are.

Figure 12.3: Allow or deny SSH access to groups or individuals.
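
In text form, a configuration like the one Figure 12.3 describes amounts to these two directives (group and user names are from the example; note the lowercase):

```
AllowGroups grpssh sysadms
DenyUsers john joe sam
```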

You’ll want to take note of these items:

  • Do not use both AllowUsers and AllowGroups. You can use one or the other but not both.
  • The profile and group names must be specified in lowercase! The directive itself is not case-sensitive, but the values are and, counter to what one might think, it’s expecting the values to be in lowercase.
  • When specifying multiple profiles (or groups), separate each with a blank.
  • You must end and restart the server to have your configuration changes take effect.

TCP/IP Servers – Securing the Connection to IBM i

I’m not going to discuss the security considerations of each TCP/IP server; I did that in IBM i Security Administration and Compliance, Third Edition. What I want to encourage you to do is review the servers that you have autostarting and determine if that’s the right setting. If a server isn’t being used, change its autostart value. For example, there are known exposures with the *REXEC server. Hopefully, it’s not in use, and therefore, I’d make sure its autostart value is *NO.

The easiest way I know to display (and change) the servers using autostart is to use New Nav. Click on the icon shown in Figure 12.1, then TCP/IP Configuration, then TCP/IP Configuration Properties.

Figure 12.1: Use New Nav to display the TCP/IP Configuration Properties, including the autostart value.

Figure 12.2 shows a portion of the window that’s displayed when you click on Servers to Start. To change the autostart attribute, simply check or uncheck the box in front of the server. It couldn’t be easier than that.

Figure 12.2: Click on Servers to Start to display or change the autostart value of the servers.

Exit Points

I alluded to exit points when I described using Function Usage to control access to FTP, ODBC, and DDM/DRDA. An exit point is a point defined by the operating system where a user-written program can be called.

Several types of exit points have been defined throughout IBM i. One is where the exit program is passed a predefined set of information and, based on that information, the exit program can send back to the operating system an indication as to whether the process should continue or fail.

Another type of exit is informational—that is, at the time the exit program is called, the operating system passes a predefined set of information; however, the exit has no say as to whether the action will take place. For example, there are exit points that have been defined when a profile is created, restored, and deleted. The exit program is passed the name of the profile and the action being taken. This provides the opportunity for the exit program to take action on its own, such as creating the user’s /home directory or adding them to the system distribution directory when the profile is created and then deleting those when the profile is deleted. But regardless of what the exit program does, the action on the profile will occur.

Exit points that have the ability to stop an action have been around for many years, and several vendors provide software that puts a user interface to these exits, allowing you to control access to many (but not all) connections to the system. Because there aren’t exits for every method of accessing the system, you will always hear me emphasizing the need to implement object-level security because it’s always in place. However, using exit points can provide an additional layer of defense if you need more “situational” access controls—for example, if you want to allow an ODBC connection only from a specific IP address and block all others. Or only allow FTP during business hours.

The other reason to use exit points is for the logging opportunities they provide. I’ve described how you can use the GR audit journal entries to determine when users are entering the system via FTP, ODBC, or DDM, but that’s about all the information you get. The audit journal doesn’t provide the name of the object being accessed or the SQL being run; however, an exit program can log that information, and you can have a more-detailed idea of what’s happening on your system by reviewing the logs associated with the transactions flowing through the exit program. Most organizations only use exit point software for its logging capabilities, but exit programs can be used to add an additional layer of security if you use the rules and block access. Some organizations have made exit programs an integral part of their security program.

To see the exit programs defined on the system, along with the programs assigned to those exit points, you can use Work with Registration Information (WRKREGINF), but the output is only to display or a spooled file. If you want to send it to a spreadsheet or to filter it differently than the command allows, you can use the QSYS2.EXIT_POINT_INFO view to list the exit points available. From that view, you can see which exits have programs assigned and then use QSYS2.EXIT_PROGRAM_INFO to list the programs.
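
For example, run from Run SQL Scripts (the specific result columns named here are my best recollection; SELECT * works if they differ on your release):

```sql
-- Which exit points exist, and how many programs are attached
SELECT EXIT_POINT, FORMAT, EXIT_PROGRAMS
  FROM QSYS2.EXIT_POINT_INFO;

-- The programs assigned to each exit point
SELECT EXIT_POINT, FORMAT, EXIT_PROGRAM_LIBRARY, EXIT_PROGRAM
  FROM QSYS2.EXIT_PROGRAM_INFO;
```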

Listing the Function ID Settings – Implementing Function Usage (Application Administration)

If you want to have a different way of viewing these settings or need to print a report or send it to a spreadsheet or file, you have a few options. You can use the Work with Function Usage (WRKFCNUSG) or Display Function Usage (DSPFCNUSG) commands. But the output is display or spooled file only, and the spooled file is not the easiest to read.

When you’re viewing the settings in New Nav, you can export the entries you’re currently viewing. Choose Actions > Export. Finally, you can always use the IBM i Service to get a list and then right-click on the results and send them to a spreadsheet. The IBM i Service is QSYS2.FUNCTION_INFO, and you can certainly write your own SQL to list the details using Run SQL Scripts. But if you have the entries you want but can’t export them all (New Nav can only export the rows you’re viewing, which may not be the full list of entries), go to the upper right corner and click on SQL. What’s displayed is the SQL used to select the entries you’re viewing. You can either run that SQL directly if you have configured ACS to do so, or you can copy and paste the SQL into Run SQL Scripts.
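
A minimal sketch of querying the service directly (QIBM_DB_ZDA is just an example function ID, and the QSYS2.FUNCTION_USAGE companion view named here is an assumption worth verifying on your release):

```sql
-- Every function ID and its current settings
SELECT *
  FROM QSYS2.FUNCTION_INFO;

-- Profile-level allow/deny settings for one function
SELECT FUNCTION_ID, USER_NAME, USAGE, USER_TYPE
  FROM QSYS2.FUNCTION_USAGE
 WHERE FUNCTION_ID = 'QIBM_DB_ZDA';
```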

Using the Audit Journal to Detect Access

As I’ve explained in earlier chapters, when any function ID is changed (either default access or you allow a profile or group access), the operating system starts to generate a GR audit journal entry whenever one of these function IDs is checked. When are they checked? When I sign in to New Nav, all of the function IDs associated with New Nav are checked to see which categories I’m allowed to use. When I make an ODBC connection, the QIBM_DB_ZDA function is checked to determine if my profile is allowed to make the connection. Each check results in a GR audit journal entry. These entries give you the ability to determine who’s accessing the system via FTP, ODBC, and DDM/DRDA, which is incredibly helpful when trying to determine how to secure objects. See chapter 9 for a more-detailed explanation of using the GR audit journal entries.

You Need to Do More Than Just Control New Nav and ACS to Secure Your Data

I started this chapter by saying that Function Usage is one way to implement multiple layers of defense. Please make sure it’s not the only defense you employ to secure your system. Using Function Usage and other ways of limiting access to Navigator for i and ACS functions is essentially the same as implementing “menu security” in a green-screen environment. Assuming that limiting users’ access to the user interface is protection enough turns a blind eye to the fact that there are many ways to access IBM i data besides Navigator for i and ACS. If you want to ensure your data is secure, you must secure it with object-level security!

Controlling Access to ACS Features- Implementing Function Usage (Application Administration)

To find the functions that allow you to control ACS, scroll over to the Category column and filter using “ACS.” Use the same techniques I described for controlling access to Navigator for i. The only difference is that there’s no single function that shuts off access to all ACS features.
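In SQL terms, the same filter looks something like this (column names assumed, as before; check the service's catalog on your release):

```sql
-- Functions whose category mentions ACS
SELECT FUNCTION_ID, FUNCTION_CATEGORY
  FROM QSYS2.FUNCTION_INFO
 WHERE UPPER(FUNCTION_CATEGORY) LIKE '%ACS%'
 ORDER BY FUNCTION_ID;
```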

Controlling Access to IBM i Features

The third category of functions defined in Function Usage is those that are checked before a function is performed on IBM i itself. There’s no way to adequately describe all of these features because they span such a wide range of functionality. The best I can do is describe several of the ones my clients use.

To get a list of all functions controlling IBM i features, scroll to the Category column and filter using “Host.” One of my favorite functions is shown in Figure 11.4. I like this function because it eliminates one of the excuses programmers use when attempting to justify their need for *ALLOBJ on production systems. IBM i prohibits users without *ALLOBJ from viewing the joblog of a job running under a profile that has been assigned *ALLOBJ. This makes sense, except that production jobs run from a job scheduler often run under a profile that has *ALLOBJ. If a developer is on call and one of these jobs fails, the developer will be prevented from viewing the joblog of the failed job…unless they’ve been given access to the QIBM_ACCESS_ALLOBJ_JOBLOG function and they have *JOBCTL special authority. It’s rare that developers on call don’t have *JOBCTL; that, combined with access to this function, gives them the ability to continue debugging these jobs without having *ALLOBJ themselves. Simply specify the developer’s group profile in the Profile(s) field and click Add under the Access Allowed column.
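If you’d rather script the change than click through New Nav, the Change Function Usage (CHGFCNUSG) command does the same thing. A sketch, where DEVGRP is a stand-in for your developers’ group profile:

```cl
/* Allow the developer group to view *ALLOBJ joblogs         */
/* (each developer still also needs *JOBCTL special authority) */
CHGFCNUSG FCNID(QIBM_ACCESS_ALLOBJ_JOBLOG) USER(DEVGRP) +
          USAGE(*ALLOWED)
```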

Figure 11.4: Allow individuals and groups with *JOBCTL to access *ALLOBJ joblogs.

My other three faves control access to the system via DDM/DRDA (the QIBM_DB_DDMDRDA function), ODBC (QIBM_DB_ZDA), and FTP (the QIBM_QTMF* functions). Think of these functions as on/off switches for DDM/DRDA, ODBC, and FTP: either users can access the system via these protocols or they can’t. There’s no filtering on the object being accessed and no control by time of day or originating IP address. If you need your access controls to be that granular, or you need to log the details of what’s being accessed or the SQL being run on the request, then you need to use an exit program. But if you’re simply trying to shut off access via these protocols, these functions do the trick. You have the same ability to deny access by default, to use or not use *ALLOBJ to allow access, and to list specific users or groups to allow or deny. I’ve had clients who knew exactly which profiles should be using ODBC, for example. They used the QIBM_DB_ZDA function, changed the default access to Denied, allowed *ALLOBJ (because users with *ALLOBJ are tightly controlled in their environment), and listed the two other profiles allowed to use ODBC. For them, this provided the control they needed.
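Scripted with CHGFCNUSG, that setup might look like the following. ODBCUSR1 and ODBCUSR2 are example profile names, and I’m working from memory on the DFTUSG and ALLOBJAUT parameter names, so prompt the command (F4) to confirm them before running:

```cl
/* Deny ODBC by default, but let *ALLOBJ users through        */
/* (DFTUSG and ALLOBJAUT parameter names assumed; verify F4)  */
CHGFCNUSG FCNID(QIBM_DB_ZDA) DFTUSG(*DENIED) ALLOBJAUT(*USED)

/* Explicitly allow the two known ODBC profiles (example names) */
CHGFCNUSG FCNID(QIBM_DB_ZDA) USER(ODBCUSR1) USAGE(*ALLOWED)
CHGFCNUSG FCNID(QIBM_DB_ZDA) USER(ODBCUSR2) USAGE(*ALLOWED)
```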