3 Reasons to Move Your On-Premise Data Architecture to the Cloud

Most companies only use 5 to 10 percent of the data they collect. So estimates Beatriz Sanz Sai, a 20-year veteran in advanced analytics and the head of Ernst and Young’s global data and analytics practice. While it’s impossible to validate such a claim, the fact is many organizations are gathering lots of data but analyzing little of it.

Legacy database management systems shoulder much of the blame for this. They hog time and resources for the sake of storing, managing, and preparing data and thus impede analytics.

Original Link

Managing High Availability in PostgreSQL: Part I

Managing high availability in your PostgreSQL hosting is very important to ensuring your clusters maintain exceptional uptime and strong operational performance so your data is always available to your application. In an earlier post, we introduced you to configuring high availability for PostgreSQL using streaming replication, and now we’re going to show you how to best manage PostgreSQL high availability.

There are multiple tools available for managing the high availability of your PostgreSQL clusters using streaming replication. These solutions offer automatic failover capabilities, monitoring, replication, and other useful administrative tasks. Some of the prominent open source solutions include:

Original Link

PostgreSQL Trends: Metrics and Time-Consuming Tasks

PostgreSQL, the fourth most popular database and DBMS of the Year in 2017, has exploded in popularity amongst the development and database communities across the world. Stealing market share from leaders Oracle, MySQL, and Microsoft SQL Server, PostgreSQL hosting is also highly leveraged by new businesses in exciting spaces like IoT, e-commerce, SaaS, analytics, and more.

So What’s Trending in PostgreSQL Management?

We attended PostgresOpen in San Francisco last month to uncover the latest trends from the experts themselves.

Original Link

You Should Always Use a Custom DB Parameter Group When Creating an RDS Instance — Here’s How

Before spinning up a new Amazon RDS instance, it’s critical to first make sure you have a new custom DB parameter group ready to use with it. If you don’t, it might become necessary to perform an RDS instance restart to replace the default DB parameter group, even though the database parameter you’d like to change is modifiable and dynamic.

For AWS RDS instances, you manage your database engine configuration through the use of parameters in a DB parameter group. DB parameter groups act as a container for engine configuration values that are applied to one or more DB instances.

A default DB parameter group is created if you make a database instance without specifying a custom DB parameter group. This default group contains database engine defaults and Amazon RDS system defaults based on the engine, compute class, and allocated storage of the instance.

Now, let’s say you need to modify the AUTOCOMMIT DB parameter of your MySQL 5.6 RDS instance. Because it’s modifiable and dynamic, it seems like you can just go ahead and change it, and it’ll just take effect, right?

Not so fast. You cannot just modify the parameter settings of a default DB parameter group — you must create your own DB parameter group to change parameter settings from their default value.

For example, if you go to the default MySQL 5.6 parameter group and click the “Edit Parameters” button, the console won’t actually let you save any changes to it.

With the intention of replacing the default DB parameter group, you proceed to modify the RDS instance and choose to apply the changes immediately.

However, this doesn’t mean you’re done. Your RDS instance will now wait for the next reboot before the DB parameter group replacement takes effect, even though you just told it to “apply immediately.” The parameter group status will show “applying” and then “pending-reboot,” which will clear after you bounce the instance.

A better approach is to create a new DB parameter group before creating the RDS instance, in anticipation of your future needs. Even if you don’t plan on changing any DB parameters at the time, preparing a custom DB parameter group for every new RDS instance gives you the flexibility to make necessary modifications quickly down the road.
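
If you provision instances with scripts rather than the console, the same preparation is easy to automate. Below is a minimal sketch using boto3; the group name, engine version, instance sizing, and credentials are placeholders, not a complete production configuration:

import boto3

rds = boto3.client("rds")

# Create the custom DB parameter group up front (name and family are examples).
rds.create_db_parameter_group(
    DBParameterGroupName="myapp-mysql56-params",
    DBParameterGroupFamily="mysql5.6",
    Description="Custom parameters for myapp",
)

# Reference the custom group when the instance is created.
rds.create_db_instance(
    DBInstanceIdentifier="myapp-db",
    Engine="mysql",
    EngineVersion="5.6.40",
    DBInstanceClass="db.t2.medium",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword="change-me",
    DBParameterGroupName="myapp-mysql56-params",
)

# Later, a dynamic parameter such as autocommit can be changed in place with no
# reboot needed, because the instance already uses a custom parameter group.
rds.modify_db_parameter_group(
    DBParameterGroupName="myapp-mysql56-params",
    Parameters=[
        {
            "ParameterName": "autocommit",
            "ParameterValue": "0",
            "ApplyMethod": "immediate",
        }
    ],
)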

And there you have it!

Ron Eickler is Manager, Cloud Engineering at Stratalux, a consulting and managed services provider for Amazon Web Services (AWS).

Original Link

The Top 6 Free Redis Memory Analysis Tools

When it comes to analyzing the memory usage of a Redis instance, there are lots of free and open-source tools on the market, along with a smattering of paid products. Some of the most popular ones are jacks-of-all-trades, but if you’re looking for a deeper analysis of your memory problems, you might be better off with one of the more targeted, lesser-known tools.

In this post, we’ve compiled a list of the top 6 free tools we found most useful in analyzing memory usage of our Redis instances:

  1. Redis Memory Analyzer (RMA)
  2. Redis Sampler
  3. RDB Tools
  4. Redis-Audit
  5. Redis Toolkit
  6. Harvest

1) Redis Memory Analyzer (RMA)

Help section of RMA

Redis Memory Analyzer (RMA) is one of the most comprehensive FOSS memory analyzers available for Redis. It supports three different levels of detail:

  • Global — Overview of memory usage information.
  • Scanner — Highest level keyspace/prefix level memory usage information — in other words, the shortest common prefix is used.
  • RAM — Lowest level keyspace/prefix — in other words, the longest common prefix is used.

Each mode has its own uses — you can get further details in the RMA ReadMe.

RMA — Global Mode

Output from RMA in global mode

In the global mode, RMA provides some high-level statistics, like the number of keys, system memory, resident set size, keyspace size, etc. A unique feature is the “keyspace overhead,” which is the memory used by the Redis system to store keyspace-related information, like pointers for the list data structures.

RMA — Scanner Mode

Output from RMA in scanner mode

In the scanner mode, we get an overview of our keyspace. It gives the high-level namespaces (so a:b:1 and a:c:1 are clubbed together as a:*) along with the types of its items and the percentage of memory consumed by that namespace. It’s useful to start with this information and then use the “RAM” behavior coupled with the namespace pattern to do a detailed analysis.

RMA — RAM Mode

Output from RMA in ram mode

In the RAM mode, we get keyspace-level memory consumption, as provided by most of the other FOSS memory analyzers. So here, a:b:1 and a:c:1 are taken separately as a:b:* and a:c:*, and we get detailed information about memory used, actual data size, overhead, encoding, min and max TTL, etc. This helps pinpoint the biggest memory-hoggers in our system.
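
If you just need a rough, namespace-level number and would rather not install anything, the same idea can be approximated in a few lines of redis-py. This is only a sketch of the technique, not how RMA itself is implemented; it assumes Redis 4.0+ (for the MEMORY USAGE command), a locally reachable instance, and colon-delimited key names:

from collections import defaultdict

import redis

r = redis.Redis(host="127.0.0.1", port=6379, db=0)

usage = defaultdict(int)
counts = defaultdict(int)

# SCAN keeps the impact on a live server low; KEYS * would block it.
for key in r.scan_iter(count=1000):
    prefix = key.split(b":")[0] + b":*"   # club a:b:1 and a:c:1 together as a:*
    size = r.memory_usage(key) or 0       # per-key bytes, including overhead
    usage[prefix] += size
    counts[prefix] += 1

for prefix, total in sorted(usage.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{prefix.decode():<30} {counts[prefix]:>8} keys {total:>12} bytes")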

Unfortunately, this tool is not updated very often (the last commit on GitHub was more than a year ago). Even so, it’s one of the best we found for detailed analysis.

RMA Installation and Usage:

RMA requires Python and PIP to be installed on the system (both are available for all major OSes). Once they’re installed, you can install RMA with a single command — “pip install rma”

It’s pretty simple to use from the command line. The syntax is “rma [-s HOST] [-p PORT] [-a PASSWORD] [-d DB] [-m pattern-to-match] [-l number-of-keys-to-scan] [-b BEHAVIOUR] [-t comma-separated-list-of-data-types-to-scan]”
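
For example, a scanner-mode run limited to hash and list keys under a hypothetical user:* namespace on a local instance would look something like “rma -s 127.0.0.1 -p 6379 -b scanner -m user:* -t hash,list”; the host, pattern, and type list here are just placeholders to show how the options combine.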

RMA Pros:

  • Works in real time.
  • Uses the SCAN command to walk through the database, so the performance impact is limited and the analysis is very accurate.
  • Well-documented — it’s easy to find usage examples.
  • Supports robust customization and filtering options (for example, analyzing only specific data types, or considering only keys matching a specific pattern).
  • Can provide details at different levels — namespaces, keys, or global values.
  • Unique amongst all the tools we audited in that it shows the data structure overhead (that is, how much memory is used to store internal Redis information like the pointers for a list data type).

RMA Cons:

  • Does not support probabilistic sampling. Linearly scanning the database can be very slow for large databases, though there is an option to stop scanning once a certain number of keys have been returned to improve performance.
  • There is a lot of detail in the output; while helpful for experts, it might serve only to confuse novices.

2) Redis Sampler

The help section of Redis Sampler

Redis Sampler is a very powerful tool that can give deep insights into the memory usage of a Redis instance. It’s maintained by antirez, the developer behind Redis, and that deep knowledge of Redis shows through in this tool. The tool is not updated very frequently, but there are not many issues reported anyway.

Output from Redis Sampler

Redis Sampler does a probabilistic scan of the database and reports the following information:

  • The percentage distribution of keys amongst various data types — based on the number of keys, rather than the size of objects.
  • The largest keys of type string, based on strlen, and the percentage of memory they consume.
  • For all other data types, the largest keys are calculated and displayed as two separate lists: one based on the size of the object, and another based on the number of items in the object.
  • For every data type, it also shows a “Power of 2 distribution.” This is really useful in understanding the size distribution within a data type. The output basically details what percentage of keys of a given type have a size in the range > 2^x and <= 2^(x+1). (A small sketch of this bucketing appears after this list.)
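
To make the bucketing concrete, here is a minimal sketch of the same idea for string keys using redis-py. It is not Redis Sampler’s code; the host, port, sample size, and the clamping of 1-byte values into the first bucket are assumptions made for illustration:

from collections import Counter

import redis

r = redis.Redis(host="127.0.0.1", port=6379, db=0)

SAMPLES = 1000
buckets = Counter()
sampled = 0

for _ in range(SAMPLES):
    key = r.randomkey()                      # probabilistic sampling, as described above
    if key is None or r.type(key) != b"string":
        continue
    size = r.strlen(key)                     # type-specific length command, not serialized size
    power = max((size - 1).bit_length() - 1, 0)   # size falls in (2^power, 2^(power + 1)]
    buckets[power] += 1
    sampled += 1

for power in sorted(buckets):
    share = 100.0 * buckets[power] / max(sampled, 1)
    print(f"> 2^{power} and <= 2^{power + 1}: {share:.1f}% of sampled strings")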

Redis Sampler Installation and Usage:

This is a single Ruby script. It requires Ruby to be already installed, along with RubyGems and the “redis” gem. Usage is pretty straightforward — from the command line, execute “./redis-sampler.rb”, passing the host, port, and database number as arguments.

Redis Sampler Pros:

  • Very simple to use — no options to investigate and understand.
  • The output is easy to understand, even for novices, but has enough information for very detailed analyses of a Redis instance by experts. The sections are clearly demarcated and easy to filter out.
  • Works on all Redis versions.
  • Does not use any privileged commands like DEBUG OBJECT, so it can be used on any system, including Amazon’s ElastiCache.
  • It uses data type-specific length commands to identify data size, so the usage reported is not affected by serialization.
  • Works on live data. Though the recommendation is to run on loopback interface, it supports sampling remote systems.

Redis Sampler Cons:

  • If the sample size is set higher than the database cardinality, it will still use RANDOMKEY rather than SCAN.
  • No bundle or Docker image available. You have to manually install dependencies (though on the bright side, there are only 2 dependencies).
  • Reports the data size, which does not exactly match the space occupied on RAM because of data structure storage overheads.
  • Does not work out-of-the-box if your Redis instance requires authentication. You need to modify the script to take a password; in the simplest form, you can search for redis = Redis.new(:host => ARGV[0], :port => ARGV[1].to_i, :db => ARGV[2].to_i) and change it to redis = Redis.new(:host => ARGV[0], :port => ARGV[1].to_i, :db => ARGV[2].to_i, :password => "add-your-password-here")

3) RDB Tools

Help section of RDB Tools

RDB Tools is a very useful suite of tools for any serious Redis administrator. There is a tool for almost every use-case we could think of, but in this post, we’ll concentrate solely on the memory analysis tool. While nowhere near as comprehensive as RMA or Redis Sampler, RDB Tools gives 3 important pieces of information:

RDB Output showing all keys with size greater than 500 bytes

1) All keys where value has (serialized) size greater than B bytes [B specified by user].

2 largest keys as shown via RDB

2) The largest N keys [N specified by user].

RDB output showing memory occupied by a particular key

3) Size of a particular key: this is read live from the database.

This suite has many active contributors on GitHub and is pretty frequently updated. RDB Tools is also well-documented on the internet. The maintainer, Sripathi Krishnan, is well known in the Redis community for the many tools he has provided over the years.

RDB Tools Installation and Usage:

RDB Tools requires Python and PIP to be installed on the system (both are available for all major OSes). Once they’re installed, you can install the RDB tools with a single command — “pip install rdbtools python-lzf”

Usage is pretty straightforward:

  • To get the 200 largest keys: rdb -c memory /var/redis/6379/dump.rdb --largest 200 -f memory.csv
  • To get all keys greater than 128 bytes: rdb -c memory /var/redis/6379/dump.rdb --bytes 128 -f memory.csv
  • To get size of a key: redis-memory-for-key -s localhost -p 6379 -a mypassword person:1

RDB Tools Pros:

  • Outputs a CSV file, which can be used with other FOSS tools to easily create data visualizations, and can also be imported into an RDBMS to run analytics.
  • Very well documented.
  • Supports customization and filtering options, so you can get more useful reports.

RDB Tools Cons:

  • The analysis does not work on live data; you must take an RDB dump. As a result, the memory usage reported is the serialized memory, which is not exactly the same as the memory occupied in RAM.
  • It does not have any built-in support for grouping, so it cannot find the largest namespaces (see the sketch below for a simple workaround).
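
Because the memory report is plain CSV, the missing grouping can be bolted on with a few lines of Python. This is a sketch, not part of RDB Tools; it assumes the report was generated as shown above and that it contains “key” and “size_in_bytes” columns with colon-delimited key names (check the header of your file, since column names can vary between rdbtools versions):

import csv
from collections import defaultdict

usage = defaultdict(int)

with open("memory.csv", newline="") as f:
    for row in csv.DictReader(f):
        prefix = row["key"].split(":")[0] + ":*"   # group a:b:1 and a:c:1 as a:*
        usage[prefix] += int(row["size_in_bytes"])

for prefix, total in sorted(usage.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{prefix:<30} {total:>12} bytes (serialized)")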

4) Redis-Audit

Help page from Redis-Audit

Redis-Audit is a probabilistic tool that’s useful for getting a quick overview of your memory usage. It outputs useful information on key groups, like overall memory consumption, maximum TTL in the group, average last access time, the percentage of keys in the group that expire, etc. This is the perfect tool if you need to find the most memory-hogging key group in your application.

Redis-Audit Installation and Usage:

Output from Redis-Audit

You must have Ruby and Bundler already installed. Once they are, you can either clone the Redis-Audit repository to a folder or download the zip and unzip it to a folder. From that folder, run “bundle install” to complete the installation.

Usage is pretty straightforward: from the command line, execute “redis-audit.rb hostname [port] [password] [dbnum] [sample_size]”

Redis-Audit Pros:

  • Allows you to define your own regex for keyspace/prefix grouping.
  • Works on all Redis versions.
  • If the sample size is larger than the actual number of keys, it walks through all keys. On the flip side, this operation uses KEYS * rather than SCAN — possibly blocking other operations.

Redis-Audit Cons:

  • Uses the DEBUG OBJECT command (unavailable in ElastiCache); as a result, it reports the serialized size, which differs from the actual size occupied in RAM.
  • The output is not very easy to parse quickly, as it’s not tabularized.

5) Redis Toolkit

Help section of Redis Toolkit

Redis Toolkit is a bare-bones monitoring solution that can be used to analyze two key metrics: hit rate and memory consumption. The project is periodically updated for bug fixes but does not have the community support some of the more famous tools enjoy.

Redis Toolkit Installation and Usage:

Report from Redis Toolkit

You must have Docker installed on your system. Then, clone the GitHub repository (or download as zip and unzip to a folder). From that folder, installation is as simple as executing “./redis-toolkit install.”

Usage is purely via command line, through a series of simple commands.

  • To start monitoring hit-rate: ./redis-toolkit monitor
  • To report hit-rate: ./redis-toolkit report -name NAME -type hitrate
  • To stop monitoring hit-rate: ./redis-toolkit stop
  • To create the dump file on local system: ./redis-toolkit dump
  • To report memory usage: ./redis-toolkit report -type memory -name NAME

Redis Toolkit Pros:

  • Easy-to-understand interface that gives you the exact information you require.
  • Can group prefixes to whatever level is useful for you (so a:b:1 and a:c:1 can be counted together as a:* or counted separately, whichever you choose).
  • Works on all Redis versions; does not require access to privileged commands like DEBUG OBJECT.
  • Well-documented.

Redis Toolkit Cons:

  • The memory analysis is not live; since it works on the serialized dump, the memory usage reported will not be equal to actual RAM consumption.
  • A dump has to be created on the computer where Redis Toolkit is running. If you have a remote Redis instance, this can take a while.
  • Monitoring the hit rate uses the MONITOR command to capture all commands that have run on the server. This can degrade performance and is a possible security risk in production.
  • The hit rate is calculated as |GET| / (|GET| + |SET|). So if a value changes often, its hit rate will be lower, even if there never was an actual cache miss. For example, a key that saw 80 GETs and 20 SETs reports an 80% hit rate even if every single GET found the key in the cache.

6) Harvest

Output of Harvest Tool

This is a probabilistic sampling tool that can be used to identify the 10 largest namespaces/prefixes in terms of the number of keys. It’s one of the newest tools and has not seen much traction on GitHub. However, if you’re a Redis novice looking to identify what kind of application data is clogging up your instance, you can’t get anything simpler than Harvest.

Harvest Installation and Usage:

This is downloadable as a Docker image. Once the image is ready, you can run the tool using the command “docker run --link redis:redis -it --rm 31z4/harvest redis://redis-URL” from the CLI.

Harvest Pros:

  • Works on live data.
  • Uses the “memory usage” command to get size information; hence:
    • Gives accurate size information (rather than serialized size).
    • Does not require access to the DEBUG OBJECT command.
  • Your namespaces need not be : (colon) delimited. Harvest identifies common prefixes rather than depending on regex-based namespace recognition.

Harvest Cons:

  • It’s a one-trick pony — difficult to adapt it for any other use-case.
  • The tool works only with Redis v4.0 and above.
  • Minimal documentation.

Free Tool Limitations

While we found these tools very useful for debugging our Redis instances’ memory issues, you should be aware of the limitations of these free tools.

The paid tools almost always have some sort of data visualization, which is not available out-of-the-box with any of the tools we audited. The best you’ll get is CSV output that you can visualize with other FOSS tools, and many of these tools don’t even offer that. That makes for a steep learning curve, especially for novice Redis users. If you’re likely to do memory analysis often, it might be worthwhile to look at paid tools that provide good visualization.

Another limitation is the lack of historical information. In keeping with the general *nix philosophy of making small tools that do just one thing, but do it well, these tools rarely venture into the monitoring space. None of them offers even a memory consumption graph over time, and many can’t analyze live data at all.

The Bottom Line

One single tool will probably not suffice for all of your needs, but they’re pretty great weapons to have in your arsenal, paired with the monitoring capabilities provided by DBaaS services like ScaleGrid’s Redis hosting! To learn more about the great tools available with fully managed Redis hosting, check our Redis Features by Plan.

Original Link

How to Manage Your MySQL Databases

SQLyog is a professional-grade GUI tool for the MySQL RDBMS. SQLyog is available for the Windows operating system starting from Windows XP/Windows 2003 to Windows 8/Server 2008 R2. It will also work under Linux, Unix, and Mac OS X using the Wine environment. Moreover, a subset of SQLyog Enterprise/Ultimate functionalities is available with the free SQLyog Job Agent (SJA) for Linux as a native Linux utility. This makes it possible to specify and test “scheduled jobs” on a Windows environment and port execution parameters seamlessly to a Linux environment.

Why Use a Database Administration Tool

Database administration tools provide a GUI/web interface to automate database tasks like querying tables, performing find-and-replace operations, and any other task you might want to run against a database. For MySQL, there is the free MySQL Workbench. Although it’s far more visually appealing than the command-line interface, its performance is less than stellar under a heavy workload; many users have reported that MySQL Workbench hangs or fails to respond. That frustration has led DBAs to far superior paid solutions that not only deliver more robust operations but also come with various useful features to help you get the most out of your DBA tasks.

Throughout the remainder of this blog, we’ll explore how SQLyog helps boost your productivity via some of its unique and value-adding features.

Session Restore

Have you ever experienced the frustration of your system crashing, or of accidentally closing your database administration application and losing all your unsaved tabs and queries? 

With Session Restore, you can restore your previous session along with all the query tabs, query builder tabs, schema designer tabs, and the history tab just the way they were prior to the crash.

SQLyog will always restore your previous session on startup by default. However, you can turn off this behavior from the Others tab of the main Preferences dialog. It’s accessible by selecting Tools > Preferences from the main menu.

The Restore session on startup option:

Session Savepoints

SQLyog also has the ability to save Session Savepoints and to reopen connections from a savepoint in the same state they were in when the savepoint was generated.

As with the automatic restoring of the previous session on startup, the term “session” encompasses all open connections, all Query, Query Builder, and Schema Designer tabs opened for every connection, and the History tab.

The Session Savepoint commands are located on the File menu:

To save the current session, select either Save Session… or Save Session As… from the File menu. Save Session… will save to the current active session (thereby overwriting it) whereas Save Session As… will open the Save As dialog for you to choose your own file name.

Session information is written to a file with a .ysav extension. SQLyog employs a SQLite database to keep track of the last saved file’s location.

Automatic and manually saved Session Savepoints can be employed simultaneously. If automatic session restoration is enabled, the last active session will be loaded on startup. Then, any saved Session Savepoints can be accessed via the Open Session Savepoint… command.

Once a saved Session Savepoint has been opened, it may be manually closed by choosing the End Session command from the File menu. That causes SQLyog to stop tracking activity for that session. It will, however, continue to track the session for automatic session restoration if that option is activated.

If, for instance, you are a consultant working with many clients, this is a highly useful feature. You can have many savepoints for each client and get access to all the connections and scripts with a single click. And you won’t confuse what belongs to Client A with what belongs to Client B.

Autocomplete

Autocomplete, or word completion, is a software feature where the application predicts the rest of the word a user is typing. It’s one of the most coveted features in both development and administration software for a couple of very good reasons:

  1. It speeds up human-computer interactions by providing one or more words that the user might intend to enter after only a few characters have been typed into a text input field.

  2. It reduces typos by inserting the exact text of the object that the user wishes to access.

With respect to database administration software, the purpose of the autocomplete feature is to facilitate the insertion of database table and column names, as well as MySQL keywords and functions, into queries and DDL statements.

Available in SQLyog Professional, Enterprise, and Ultimate editions, the Autocomplete feature is made possible by a small database engine that is built right into SQLyog. It contains the names of MySQL keywords and functions, and it also maintains a list of the object names for each connection for fast retrieval.

In addition to the two general benefits of Autocompletion stated above, there are very real and tangible benefits when dealing with long and/or awkward object names that are often found in RDBMSes. By reducing the possibility of misspellings, there is far less risk of accidentally executing incorrect queries and reaching erroneous conclusions.

Autocomplete basically works in four independent and differing ways.

1. Auto-Popup or Tooltip

Whenever you type an object name followed by a period (.), a list of available child objects will pop up in a small window. You can then select an item from the list by using the up and down arrow keys to move the selection highlighting and then pressing the tab or enter key to select the item.

For instance, try typing a database name like “sakila.” to bring up the list of tables within the Sakila database:

Moreover, once you’ve selected a table, entering another period (.) will bring up the popup list once again with a list of columns within that table:

The Autocomplete feature also works with functions. For example, typing the letters “sub” followed by the ctrl + space key combination will bring up the popup list with functions that match your typed letters:

Typing the name of a function followed by an opening parenthesis "(" will display the parameter list of that function. Continuing with our previous example, typing "substr(" will bring up the parameters for the substr() function:

In this case, there are actually four signatures for the substr() function, each with a different parameter list. You can iterate through each of these using the up and down arrows. As you do so, the parameters will change.

While writing a routine call, the parameter list will highlight the current parameter in red.

The Ctrl + Shift + Space Shortcut

If, for whatever reason, the parameter list goes away, such as when invoking a different autocomplete feature, the ctrl + shift + space keyboard shortcut will always display the parameter list of a routine when the cursor is positioned inside it.

2. Tab Key Functionality

Highlighting a database in the Object Browser and typing in one or more characters in the editor pane will activate the tab key to act as an object iterator. For every press of the tab key, the autocomplete will iterate over all possible inputs that start with that character combination. Inputs are a combination of functions, keywords, and database objects from the selected database.

For example, say that the film table of the Sakila database is selected in the Object Browser. After typing in a value of “fil” in the editor, pressing the tab key once will select the first matching word — in this case, “film”. Pressing the tab key a second time will change the word to the next matching word — in this instance, film_actor. Once the desired word is present on the screen, simply continue typing from there.

The tab key autocomplete functionality in action:

3. Ctrl + Enter Functionality

Similar to the tab key functionality, pressing the ctrl + enter key combination after typing one or more characters in the editor pane will bring up a small window containing a list of keywords and functions starting with that character combination. You can then use the arrow keys to navigate to the one you want. Pressing enter inserts the selected entry.

4. Ctrl + Space Functionality

Whenever the cursor is placed in the editor pane, pressing ctrl + space always opens a small window with a list of all possible keywords and functions. As above, use the up and down arrow keys to select an item and then press enter to insert the selected item.

Using Autocomplete With Aliases

Since version 6.5, Autocomplete supports table aliases and column aliases:

In the above image, you will see that all Autocomplete functionalities identify “f” as an alias of the film table.

Autocomplete supports both table and column aliases in SELECT, UPDATE, and DELETE statements except in the following cases, which are not supported at this time:

  • SELECT statements used to specify a value for INSERT such as:

    INSERT INTO mytable (col1) values ((SELECT...))
    
  • SELECT statements used as an argument to a function such as:
    "SELECT IF((SELECT....),'true','false')"
    

Column-Width Persistence

The Result tab displays the data from SELECT queries in a Grid view by default. It formats the results in Excel-like rows and columns. (Form and Text views are also available.) The Grid view offers a few benefits, such as sorting (clicking on the header sorts the results by that column). This sorting is done on the client side without sending any additional queries to the MySQL server.

The Grid view offers another advantage.

With most other GUI tools that use a data grid, you will need to spend time adjusting the width of column headers every time you open the program. In SQLyog, column widths are persistent across sessions and even across master/slave replicas of the same database.

The Result grid:

Conclusion

In this blog, we explored the many advantages offered by database administration tools — in particular, those provided by SQLyog. Its unique features include:

  1. Session Restore, which recalls your previous session along with all the Query tabs, Query Builder tabs, Schema Designer tabs, and the History tab just the way they were before a sudden outage or application shutdown.

  2. Session Savepoints allow for the manual saving and restoring of sessions to any number of save point states.

  3. Available in SQLyog Professional, Enterprise, and Ultimate editions, the Autocomplete feature displays suggestions for table, view, column, and routine names, along with their parameters.
    It reduces the possibility of misspellings, so there is far less risk of accidentally executing incorrect queries and reaching erroneous conclusions.

Autocomplete works in four ways:

  1. Whenever you type an object name followed by a period (.), a list of available child objects will pop up in a small window.

  2. Highlighting a database in the Object Browser and typing in one or more characters in the editor pane will activate the tab key to act as an object iterator. For every press of the tab key, the autocomplete will iterate over all possible inputs that start with that character combination. Inputs are a combination of functions, keywords, and database objects from the selected database.

  3. Similar to the tab key functionality, pressing the ctrl + enter key combination after typing one or more characters in the editor pane will bring up a small window containing a list of keywords and functions starting with that character combination.

  4. Whenever the cursor is placed in the editor pane, pressing ctrl + space always opens a small window with a list of all possible keywords and functions.

Since version 6.5, Autocomplete has supported table aliases and column aliases.

  • With most other GUI tools that use a data grid, you will need to spend time adjusting the width of column headers every time you open the program. In SQLyog, column widths are persistent across sessions and even across master/slave replicas of the same database.

Original Link

NuoDB: Index Management and Performance Improvements

Everyone likes to save time, right? After all, it even feels good to gain time back. You can do something more important or fun with that new-found time or just get to the next part of your day a bit sooner. We like our little time savers — like pre-ground coffee, automatic garage door openers, or even that magic button on your refrigerator that, when pressed, deposits ice in your drink!

Well, at NuoDB, we realize the DBA has a number of tasks that need to be performed periodically throughout the day, week, or month, and that some of those tasks can be automated to gain back valuable time. In fact, NuoDB has always had maintenance-free indexes. This saves our customers time by not requiring DBAs to schedule a special time to run index rebuilds to rebalance and reorganize indexes. Yep, that’s right: NuoDB automatically rebuilds indexes as needed in the background, making index rebuilds a thing of the past.

In NuoDB 3.0, we continue this great tradition of saving our DBAs more time to do other things by eliminating that all-too-common task of scheduling jobs to refresh index statistics. NuoDB’s new index processing, introduced in v3.0, refreshes index statistics automatically. While your NuoDB database runs and you are accomplishing other important tasks, NuoDB is busy ensuring that all your user indexes remain up to date with current index statistics.

So Why Is Refreshing Index Statistics So Important?

After all, this regular task of refreshing index statistics is an accepted best practice that has been passed down from DBA to DBA for decades now. In real estate, the three foundational pillars are well known and accepted to be location, location, location. For databases, it’s performance, performance, performance! We always want the best possible performance we can squeeze out of the database system to keep our application users happy! And refreshing index statistics by running the ANALYZE TABLE|Index command is one of those tasks that helps contribute to optimal database performance.

As your application performs DML (inserts, updates, and deletes), your indexes take a beating (in more technical terms, they become fragmented). Over time, the internal index structure yields a higher percentage of free space, which results in SQL statements reading more index pages when scanning the index. Likewise, the once-organized index becomes fragmented, and index pages no longer reside in contiguous pages, which results in more reads to retrieve ordered data. But no worries; as mentioned above, NuoDB is busy in the background constantly rebalancing your indexes, and now, in v3.0, also refreshing the index statistics on the fly! Having constantly updated index structures — and now index statistics — ensures the NuoDB query plan optimizer is using optimal index structures and fresh index statistics to deliver the best query performance possible.

Additional Index Performance Improvements

As a bonus, NuoDB v3.0 indexes now offer improved query plan choices at run-time. Instead of a histogram bin size of 10, the new default is 256. This affords NuoDB the opportunity to make better indexing choices at runtime when using indexes with higher cardinality, further improving application performance.

To check your user indexes and their histogram bin size, run the following SQL command, and look at the MAXRESOLUTION column.

SQL> use system;
SQL> select h.maxresolution, i.schema, i.tablename, i.indexname, i.indextype, i.statstype, i.keycount, i.valid
     from indexstatistics i
     left join indexhistograms h on (i.objectid = h.objectid and i.catalogid = h.catalogid)
     where schema != 'SYSTEM';

How do you take advantage of these new features, you ask? It’s simple. Just upgrade to NuoDB 3.0 and rebuild your user indexes. Rebuilding the indexes will upgrade them from Ver1 to Ver2 (as indicated by STATSTYPE). From this point forward, index statistics will be automatically updated periodically without user intervention. We recommend that this one-time upgrade be done offline to ensure the application’s performance is not impacted.

Determining Index Version

Now you know how awesome our new indexes are, but you aren’t sure which you’re using? Here’s how you figure out whether your indexes are Ver1 or Ver2. Run the following SQL and view the STATSTYPE column:

SQL> use system;
SQL> select objectid, schema, tablename, indexname, statstype from indexstatistics where schema != 'SYSTEM';

You’ll be presented with a table like this one, and you’ll want to view the STATSTYPE column to determine which index version you’re using:

To automate the process of rebuilding your user indexes after your v3.0 upgrade, please visit GitHub General Utilities for NuoDB. Click the green Download button, and follow the Readme instructions to rebuild your user indexes. That’s it! Enjoy getting some time back by never having to run index statistics refresh commands again!

Original Link

This Month in Database: The SQL Edition

This month in Database, we’re continuing to do something a little different. In the past, we’ve looked at a variety of topics in these posts — like in the August edition, when we talked about graph theory, Async modules, and talking to databases. But like we did for Big Data last time, we’re going to wrap up the month with a deep dive into one topic. This time, it’s SQL!


Top 5 SQL Articles on DZone

We dug through the SQL-related articles on DZone (which is a lot of articles!), and these are the cream of the crop.

  1. 6 Simple Performance Tips for SQL SELECT Statements by Mpumelelo Msimanga. Performance-tuning SELECT statements can be a time-consuming task — but 20% effort will basically give you an 80% performance improvement, so it’s well worth it.

  2. Connecting to SQL Azure With SQL Management Studio by Rob Sanders. If you want to manage your SQL databases in Azure using tools that you’re familiar and comfortable with, check out these intuitive, screen-based instructions.

  3. Custom Date Formatting in SQL Server by Boyan Kosadinov. SQL Server’s built-in date/time formatting doesn’t always suit your needs, so here’s how to create your own.

  4. 10 Easy Steps to a Complete Understanding of SQL by Lukas Eder. SQL is one of the few declarative languages out there, and as such, behaves in an entirely different way from imperative, object-oriented, or even functional languages. If you’re looking for a quick and dirty way to gain an understanding of SQL, check out this tutorial.

  5. Configuring Spring Boot for Microsoft SQL Server by John Thompson. Accessing and configuring your SQL Server instance is pretty simple — if you’re using Spring Boot. Learn about the legwork you need to do before Spring can fill in the defaults for you.

PS: Are you interested in contributing to DZone? Check out our brand new Bounty Board, where you can apply for specific writing prompts and win prizes! 


Seeing SQL Sequel

Get it? It’s like a sequel to the above section on SQL articles on DZone, and you’re seeing it. Yeah… anyways, let’s journey outside of DZone and check out some recent news, conferences, and more that should be of interest to SQL newbies and experts alike.

  • SQLBits is the largest SQL Server conference in Europe and is dedicated to spreading the knowledge of the passionate SQL community. The website claims that you’ll learn more in four days than you did in the entire last year. The conference will be taking place in London from February 21-24, 2018. Get your passes before they run out!

  • DataCore tech cranks wheezing SQL Servers to ridiculous speeds (September 26, 2017). DataCore has crafted a driver for SQL Server that runs IO requests simultaneously and increases throughput. Modern CPUs have multiple cores, each supporting more than one thread executing concurrently. But with this driver, parallelizing IO is like punching the hyperspace button!

  • If you are a beginner in the world of databases and SQL, the following video is for you! With nearly 600K views and an overwhelming majority of likes, this video is a fun, energetic, and clear intro to SQL and databases that just about anybody can understand. Check it out below:


Dive Deeper Into SQL

DZone has Guides and Refcardz on pretty much every tech-related topic, but if you’re specifically interested in SQL, these will appeal the most to you. 

  1. The DZone Guide to Database and Data Persistence Tools and Techniques. Discover the best database management systems, frameworks, and methods for data storage and retrieval and learn which tools and techniques developers are using for data persistence.

  2. SQL Syntax for Apache Drill: Using SQL for the SQL-on-Everything Engine. In this DZone Refcard, discover the benefits of data agility and gain faster insights into BI and analytics through advanced SQL queries on massive datasets. 


Jobs for the SQL Expert

Below are a few jobs that may pique your interest if you’re a SQL aficionado. Check them out and apply today! (PS: If you don’t see anything you like here, you can check out our Job Board with a lot more listings!)

Senior Software Development Engineer
CSG International
Location: Chicago, Illinois, United States
Experience: Minimum of 5-7 years’ experience in software development methodologies/processes, design, implementation, and documentation within the order or billing management industry; experience with database SQL and design (Oracle, MS SQL Server, Postgres, NoSQL).

Software Engineer: Backend
Optimizory Technologies Private Limited
Location: Noida, Uttar Pradesh, India
Experience: Experience programming in Java and/or Scala; using Spring, Hibernate, Play, and Akka frameworks; and using multiple databases including Postgres, MySQL, MariaDB, Oracle, and SQL Server.


That’s all for September’s This Month in Database post! What would you like to learn about next time? Let us know in the comments!

Original Link