Commit f753f25db7860c15cb394ae1af4e640c83c4c6af

Authored by Antonio Terceiro
1 parent 341821af
Exists in master and in 90 other branches 3.x, add_sisp_to_chef, add_super_archives_plugin, api_for_colab, automates_core_packing, backup, backup_not_prod, cdtc_configuration, changes_in_buttons_on_content_panel, colab_automated_login, colab_spb_plugin_recipe, colab_widgets_settings, design_validation, dev-lappis, dev_env_minimal, disable_email_dev, docs, fix_breadcrumbs_position, fix_categories_software_link, fix_edit_institution, fix_edit_software_with_another_license, fix_get_license_info, fix_gitlab_assets_permission, fix_list_style_inside_article, fix_list_style_on_folder_elements, fix_members_pagination, fix_merge_request_url, fix_models_translations, fix_no_license, fix_software_api, fix_software_block_migration, fix_software_communities_translations, fix_software_communities_unit_test, fix_style_create_institution_admin_panel, fix_superarchives_imports, fix_sym_links_noosfero, focus_search_field_theme, gov-user-refactoring, gov-user-refactoring-rails4, header_fix, institution_modal_on_rating, kalibro-conf-refactoring, kalibro-processor-package, lxc_settings, margin_fix, mezuro_cookbook, performance, prezento, r3, refactor_download_block, refactor_software_communities, refactor_software_for_sisp, register_page, release-process, release-process-v2, remove-unused-images, remove_backup_emails, remove_broken_theme, remove_secondary_email_from_user, remove_sisp_buttons, removing_super_archives_email, review_message, scope2method, signals_user_noosfero, sisp_catalog_header, sisp_colab_config, sisp_dev, sisp_dev_master, sisp_simple_version, software_as_organization, software_catalog_style_fix, software_communities_html_refactor, software_infos_api, spb_minimal_env, spb_to_rails4, spec_refactor, stable-4.1, stable-4.2, stable-4.x, stable-devel, support_docs, syslog, temp_soft_comm_refactoring, theme_header, theme_javascript_refactory, thread_dropdown, thread_page, update_search_by_categories, update_software_api, update_softwares_boxes

Install redis on the database server

config/roles/database_server.rb
... ... @@ -2,6 +2,7 @@ name 'database_server'
2 2 description 'Database server'
3 3 run_list *[
4 4 'recipe[postgresql]',
  5 + 'recipe[redis]',
5 6 'recipe[postgresql::colab]',
6 7 'recipe[postgresql::gitlab]',
7 8 ]
... ...
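For context, the complete role after this change reads as below (reconstructed from the hunk above, with indentation restored; nothing here is new relative to the diff):

name 'database_server'
description 'Database server'
run_list *[
  'recipe[postgresql]',
  'recipe[redis]',
  'recipe[postgresql::colab]',
  'recipe[postgresql::gitlab]',
]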
cookbooks/redis/recipes/default.rb 0 → 100644
... ... @@ -0,0 +1,13 @@
  1 +package 'redis'
  2 +
  3 +template '/etc/redis.conf' do
  4 + user 'root'
  5 + group 'root'
  6 + mode 0644
  7 +
  8 + notifies :restart, 'service[redis]'
  9 +end
  10 +
  11 +service 'redis' do
  12 + action [:enable, :start]
  13 +end
... ...
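The recipe leans on two Chef conventions worth spelling out: the template resource, given no explicit source, falls back to the path basename plus ".erb" (here redis.conf.erb, the file added below), and the notification is delayed, so the restart is queued when the rendered file changes and executed once at the end of the run, after the service below has been enabled and started. A more explicit sketch of the same recipe, written under those assumptions:

package 'redis'

template '/etc/redis.conf' do
  source 'redis.conf.erb'   # explicit; Chef would default to the path basename + '.erb'
  owner 'root'              # file ownership (written as 'user' in the diff above)
  group 'root'
  mode '0644'
  # Delayed restart: runs once at the end of the Chef run if the file changed.
  notifies :restart, 'service[redis]', :delayed
end

service 'redis' do
  action [:enable, :start]
end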
cookbooks/redis/templates/redis.conf.erb 0 → 100644
... ... @@ -0,0 +1,763 @@
  1 +# MANAGED WITH CHEF. DO NOT MAKE MANUAL CHANGES
  2 +
  3 +# Redis configuration file example
  4 +
  5 +# Note on units: when memory size is needed, it is possible to specify
  6 +# it in the usual form of 1k 5GB 4M and so forth:
  7 +#
  8 +# 1k => 1000 bytes
  9 +# 1kb => 1024 bytes
  10 +# 1m => 1000000 bytes
  11 +# 1mb => 1024*1024 bytes
  12 +# 1g => 1000000000 bytes
  13 +# 1gb => 1024*1024*1024 bytes
  14 +#
  15 +# units are case insensitive so 1GB 1Gb 1gB are all the same.
  16 +
  17 +################################## INCLUDES ###################################
  18 +
  19 +# Include one or more other config files here. This is useful if you
  20 +# have a standard template that goes to all Redis server but also need
  21 +# to customize a few per-server settings. Include files can include
  22 +# other files, so use this wisely.
  23 +#
  24 +# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
  25 +# from admin or Redis Sentinel. Since Redis always uses the last processed
  26 +# line as value of a configuration directive, you'd better put includes
  27 +# at the beginning of this file to avoid overwriting config change at runtime.
  28 +#
  29 +# If instead you are interested in using includes to override configuration
  30 +# options, it is better to use include as the last line.
  31 +#
  32 +# include /path/to/local.conf
  33 +# include /path/to/other.conf
  34 +
  35 +################################ GENERAL #####################################
  36 +
  37 +# By default Redis does not run as a daemon. Use 'yes' if you need it.
  38 +# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
  39 +daemonize no
  40 +
  41 +# When running daemonized, Redis writes a pid file in /var/run/redis.pid by
  42 +# default. You can specify a custom pid file location here.
  43 +pidfile /var/run/redis/redis.pid
  44 +
  45 +# Accept connections on the specified port, default is 6379.
  46 +# If port 0 is specified Redis will not listen on a TCP socket.
  47 +port 6379
  48 +
  49 +# TCP listen() backlog.
  50 +#
  51 +# In high requests-per-second environments you need an high backlog in order
  52 +# to avoid slow clients connections issues. Note that the Linux kernel
  53 +# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
  54 +# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
  55 +# in order to get the desired effect.
  56 +tcp-backlog 511
  57 +
  58 +# By default Redis listens for connections from all the network interfaces
  59 +# available on the server. It is possible to listen to just one or multiple
  60 +# interfaces using the "bind" configuration directive, followed by one or
  61 +# more IP addresses.
  62 +#
  63 +# Examples:
  64 +#
  65 +# bind 192.168.1.100 10.0.0.1
  66 +bind 127.0.0.1 <%= node['peers']['database'] %>
  67 +
  68 +# Specify the path for the Unix socket that will be used to listen for
  69 +# incoming connections. There is no default, so Redis will not listen
  70 +# on a unix socket when not specified.
  71 +#
  72 +# unixsocket /tmp/redis.sock
  73 +# unixsocketperm 700
  74 +
  75 +# Close the connection after a client is idle for N seconds (0 to disable)
  76 +timeout 0
  77 +
  78 +# TCP keepalive.
  79 +#
  80 +# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
  81 +# of communication. This is useful for two reasons:
  82 +#
  83 +# 1) Detect dead peers.
  84 +# 2) Take the connection alive from the point of view of network
  85 +# equipment in the middle.
  86 +#
  87 +# On Linux, the specified value (in seconds) is the period used to send ACKs.
  88 +# Note that to close the connection the double of the time is needed.
  89 +# On other kernels the period depends on the kernel configuration.
  90 +#
  91 +# A reasonable value for this option is 60 seconds.
  92 +tcp-keepalive 0
  93 +
  94 +# Specify the server verbosity level.
  95 +# This can be one of:
  96 +# debug (a lot of information, useful for development/testing)
  97 +# verbose (many rarely useful info, but not a mess like the debug level)
  98 +# notice (moderately verbose, what you want in production probably)
  99 +# warning (only very important / critical messages are logged)
  100 +loglevel notice
  101 +
  102 +# Specify the log file name. Also the empty string can be used to force
  103 +# Redis to log on the standard output. Note that if you use standard
  104 +# output for logging but daemonize, logs will be sent to /dev/null
  105 +logfile /var/log/redis/redis.log
  106 +
  107 +# To enable logging to the system logger, just set 'syslog-enabled' to yes,
  108 +# and optionally update the other syslog parameters to suit your needs.
  109 +# syslog-enabled no
  110 +
  111 +# Specify the syslog identity.
  112 +# syslog-ident redis
  113 +
  114 +# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
  115 +# syslog-facility local0
  116 +
  117 +# Set the number of databases. The default database is DB 0, you can select
  118 +# a different one on a per-connection basis using SELECT <dbid> where
  119 +# dbid is a number between 0 and 'databases'-1
  120 +databases 16
  121 +
  122 +################################ SNAPSHOTTING ################################
  123 +#
  124 +# Save the DB on disk:
  125 +#
  126 +# save <seconds> <changes>
  127 +#
  128 +# Will save the DB if both the given number of seconds and the given
  129 +# number of write operations against the DB occurred.
  130 +#
  131 +# In the example below the behaviour will be to save:
  132 +# after 900 sec (15 min) if at least 1 key changed
  133 +# after 300 sec (5 min) if at least 10 keys changed
  134 +# after 60 sec if at least 10000 keys changed
  135 +#
  136 +# Note: you can disable saving at all commenting all the "save" lines.
  137 +#
  138 +# It is also possible to remove all the previously configured save
  139 +# points by adding a save directive with a single empty string argument
  140 +# like in the following example:
  141 +#
  142 +# save ""
  143 +
  144 +save 900 1
  145 +save 300 10
  146 +save 60 10000
  147 +
  148 +# By default Redis will stop accepting writes if RDB snapshots are enabled
  149 +# (at least one save point) and the latest background save failed.
  150 +# This will make the user aware (in a hard way) that data is not persisting
  151 +# on disk properly, otherwise chances are that no one will notice and some
  152 +# disaster will happen.
  153 +#
  154 +# If the background saving process will start working again Redis will
  155 +# automatically allow writes again.
  156 +#
  157 +# However if you have setup your proper monitoring of the Redis server
  158 +# and persistence, you may want to disable this feature so that Redis will
  159 +# continue to work as usual even if there are problems with disk,
  160 +# permissions, and so forth.
  161 +stop-writes-on-bgsave-error yes
  162 +
  163 +# Compress string objects using LZF when dumping .rdb databases?
  164 +# By default that's set to 'yes' as it's almost always a win.
  165 +# If you want to save some CPU in the saving child set it to 'no' but
  166 +# the dataset will likely be bigger if you have compressible values or keys.
  167 +rdbcompression yes
  168 +
  169 +# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
  170 +# This makes the format more resistant to corruption but there is a performance
  171 +# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
  172 +# for maximum performances.
  173 +#
  174 +# RDB files created with checksum disabled have a checksum of zero that will
  175 +# tell the loading code to skip the check.
  176 +rdbchecksum yes
  177 +
  178 +# The filename where to dump the DB
  179 +dbfilename dump.rdb
  180 +
  181 +# The working directory.
  182 +#
  183 +# The DB will be written inside this directory, with the filename specified
  184 +# above using the 'dbfilename' configuration directive.
  185 +#
  186 +# The Append Only File will also be created inside this directory.
  187 +#
  188 +# Note that you must specify a directory here, not a file name.
  189 +dir /var/lib/redis/
  190 +
  191 +################################# REPLICATION #################################
  192 +
  193 +# Master-Slave replication. Use slaveof to make a Redis instance a copy of
  194 +# another Redis server. A few things to understand ASAP about Redis replication.
  195 +#
  196 +# 1) Redis replication is asynchronous, but you can configure a master to
  197 +# stop accepting writes if it appears to be not connected with at least
  198 +# a given number of slaves.
  199 +# 2) Redis slaves are able to perform a partial resynchronization with the
  200 +# master if the replication link is lost for a relatively small amount of
  201 +# time. You may want to configure the replication backlog size (see the next
  202 +# sections of this file) with a sensible value depending on your needs.
  203 +# 3) Replication is automatic and does not need user intervention. After a
  204 +# network partition slaves automatically try to reconnect to masters
  205 +# and resynchronize with them.
  206 +#
  207 +# slaveof <masterip> <masterport>
  208 +
  209 +# If the master is password protected (using the "requirepass" configuration
  210 +# directive below) it is possible to tell the slave to authenticate before
  211 +# starting the replication synchronization process, otherwise the master will
  212 +# refuse the slave request.
  213 +#
  214 +# masterauth <master-password>
  215 +
  216 +# When a slave loses its connection with the master, or when the replication
  217 +# is still in progress, the slave can act in two different ways:
  218 +#
  219 +# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
  220 +# still reply to client requests, possibly with out of date data, or the
  221 +# data set may just be empty if this is the first synchronization.
  222 +#
  223 +# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
  224 +# an error "SYNC with master in progress" to all the kind of commands
  225 +# but to INFO and SLAVEOF.
  226 +#
  227 +slave-serve-stale-data yes
  228 +
  229 +# You can configure a slave instance to accept writes or not. Writing against
  230 +# a slave instance may be useful to store some ephemeral data (because data
  231 +# written on a slave will be easily deleted after resync with the master) but
  232 +# may also cause problems if clients are writing to it because of a
  233 +# misconfiguration.
  234 +#
  235 +# Since Redis 2.6 by default slaves are read-only.
  236 +#
  237 +# Note: read only slaves are not designed to be exposed to untrusted clients
  238 +# on the internet. It's just a protection layer against misuse of the instance.
  239 +# Still a read only slave exports by default all the administrative commands
  240 +# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
  241 +# security of read only slaves using 'rename-command' to shadow all the
  242 +# administrative / dangerous commands.
  243 +slave-read-only yes
  244 +
  245 +# Slaves send PINGs to server in a predefined interval. It's possible to change
  246 +# this interval with the repl_ping_slave_period option. The default value is 10
  247 +# seconds.
  248 +#
  249 +# repl-ping-slave-period 10
  250 +
  251 +# The following option sets the replication timeout for:
  252 +#
  253 +# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
  254 +# 2) Master timeout from the point of view of slaves (data, pings).
  255 +# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
  256 +#
  257 +# It is important to make sure that this value is greater than the value
  258 +# specified for repl-ping-slave-period otherwise a timeout will be detected
  259 +# every time there is low traffic between the master and the slave.
  260 +#
  261 +# repl-timeout 60
  262 +
  263 +# Disable TCP_NODELAY on the slave socket after SYNC?
  264 +#
  265 +# If you select "yes" Redis will use a smaller number of TCP packets and
  266 +# less bandwidth to send data to slaves. But this can add a delay for
  267 +# the data to appear on the slave side, up to 40 milliseconds with
  268 +# Linux kernels using a default configuration.
  269 +#
  270 +# If you select "no" the delay for data to appear on the slave side will
  271 +# be reduced but more bandwidth will be used for replication.
  272 +#
  273 +# By default we optimize for low latency, but in very high traffic conditions
  274 +# or when the master and slaves are many hops away, turning this to "yes" may
  275 +# be a good idea.
  276 +repl-disable-tcp-nodelay no
  277 +
  278 +# Set the replication backlog size. The backlog is a buffer that accumulates
  279 +# slave data when slaves are disconnected for some time, so that when a slave
  280 +# wants to reconnect again, often a full resync is not needed, but a partial
  281 +# resync is enough, just passing the portion of data the slave missed while
  282 +# disconnected.
  283 +#
  284 +# The bigger the replication backlog, the longer the time the slave can be
  285 +# disconnected and later be able to perform a partial resynchronization.
  286 +#
  287 +# The backlog is only allocated once there is at least a slave connected.
  288 +#
  289 +# repl-backlog-size 1mb
  290 +
  291 +# After a master has no longer connected slaves for some time, the backlog
  292 +# will be freed. The following option configures the amount of seconds that
  293 +# need to elapse, starting from the time the last slave disconnected, for
  294 +# the backlog buffer to be freed.
  295 +#
  296 +# A value of 0 means to never release the backlog.
  297 +#
  298 +# repl-backlog-ttl 3600
  299 +
  300 +# The slave priority is an integer number published by Redis in the INFO output.
  301 +# It is used by Redis Sentinel in order to select a slave to promote into a
  302 +# master if the master is no longer working correctly.
  303 +#
  304 +# A slave with a low priority number is considered better for promotion, so
  305 +# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
  306 +# pick the one with priority 10, that is the lowest.
  307 +#
  308 +# However a special priority of 0 marks the slave as not able to perform the
  309 +# role of master, so a slave with priority of 0 will never be selected by
  310 +# Redis Sentinel for promotion.
  311 +#
  312 +# By default the priority is 100.
  313 +slave-priority 100
  314 +
  315 +# It is possible for a master to stop accepting writes if there are less than
  316 +# N slaves connected, having a lag less or equal than M seconds.
  317 +#
  318 +# The N slaves need to be in "online" state.
  319 +#
  320 +# The lag in seconds, that must be <= the specified value, is calculated from
  321 +# the last ping received from the slave, that is usually sent every second.
  322 +#
  323 +# This option does not GUARANTEE that N replicas will accept the write, but
  324 +# will limit the window of exposure for lost writes in case not enough slaves
  325 +# are available, to the specified number of seconds.
  326 +#
  327 +# For example to require at least 3 slaves with a lag <= 10 seconds use:
  328 +#
  329 +# min-slaves-to-write 3
  330 +# min-slaves-max-lag 10
  331 +#
  332 +# Setting one or the other to 0 disables the feature.
  333 +#
  334 +# By default min-slaves-to-write is set to 0 (feature disabled) and
  335 +# min-slaves-max-lag is set to 10.
  336 +
  337 +################################## SECURITY ###################################
  338 +
  339 +# Require clients to issue AUTH <PASSWORD> before processing any other
  340 +# commands. This might be useful in environments in which you do not trust
  341 +# others with access to the host running redis-server.
  342 +#
  343 +# This should stay commented out for backward compatibility and because most
  344 +# people do not need auth (e.g. they run their own servers).
  345 +#
  346 +# Warning: since Redis is pretty fast an outside user can try up to
  347 +# 150k passwords per second against a good box. This means that you should
  348 +# use a very strong password otherwise it will be very easy to break.
  349 +#
  350 +# requirepass foobared
  351 +
  352 +# Command renaming.
  353 +#
  354 +# It is possible to change the name of dangerous commands in a shared
  355 +# environment. For instance the CONFIG command may be renamed into something
  356 +# hard to guess so that it will still be available for internal-use tools
  357 +# but not available for general clients.
  358 +#
  359 +# Example:
  360 +#
  361 +# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
  362 +#
  363 +# It is also possible to completely kill a command by renaming it into
  364 +# an empty string:
  365 +#
  366 +# rename-command CONFIG ""
  367 +#
  368 +# Please note that changing the name of commands that are logged into the
  369 +# AOF file or transmitted to slaves may cause problems.
  370 +
  371 +################################### LIMITS ####################################
  372 +
  373 +# Set the max number of connected clients at the same time. By default
  374 +# this limit is set to 10000 clients, however if the Redis server is not
  375 +# able to configure the process file limit to allow for the specified limit
  376 +# the max number of allowed clients is set to the current file limit
  377 +# minus 32 (as Redis reserves a few file descriptors for internal uses).
  378 +#
  379 +# Once the limit is reached Redis will close all the new connections sending
  380 +# an error 'max number of clients reached'.
  381 +#
  382 +# maxclients 10000
  383 +
  384 +# Don't use more memory than the specified amount of bytes.
  385 +# When the memory limit is reached Redis will try to remove keys
  386 +# according to the eviction policy selected (see maxmemory-policy).
  387 +#
  388 +# If Redis can't remove keys according to the policy, or if the policy is
  389 +# set to 'noeviction', Redis will start to reply with errors to commands
  390 +# that would use more memory, like SET, LPUSH, and so on, and will continue
  391 +# to reply to read-only commands like GET.
  392 +#
  393 +# This option is usually useful when using Redis as an LRU cache, or to set
  394 +# a hard memory limit for an instance (using the 'noeviction' policy).
  395 +#
  396 +# WARNING: If you have slaves attached to an instance with maxmemory on,
  397 +# the size of the output buffers needed to feed the slaves are subtracted
  398 +# from the used memory count, so that network problems / resyncs will
  399 +# not trigger a loop where keys are evicted, and in turn the output
  400 +# buffer of slaves is full with DELs of keys evicted triggering the deletion
  401 +# of more keys, and so forth until the database is completely emptied.
  402 +#
  403 +# In short... if you have slaves attached it is suggested that you set a lower
  404 +# limit for maxmemory so that there is some free RAM on the system for slave
  405 +# output buffers (but this is not needed if the policy is 'noeviction').
  406 +#
  407 +# maxmemory <bytes>
  408 +
  409 +# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
  410 +# is reached. You can select among five behaviors:
  411 +#
  412 +# volatile-lru -> remove the key with an expire set using an LRU algorithm
  413 +# allkeys-lru -> remove any key accordingly to the LRU algorithm
  414 +# volatile-random -> remove a random key with an expire set
  415 +# allkeys-random -> remove a random key, any key
  416 +# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
  417 +# noeviction -> don't expire at all, just return an error on write operations
  418 +#
  419 +# Note: with any of the above policies, Redis will return an error on write
  420 +# operations, when there are not suitable keys for eviction.
  421 +#
  422 +# At the date of writing these commands are: set setnx setex append
  423 +# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
  424 +# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
  425 +# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
  426 +# getset mset msetnx exec sort
  427 +#
  428 +# The default is:
  429 +#
  430 +# maxmemory-policy volatile-lru
  431 +
  432 +# LRU and minimal TTL algorithms are not precise algorithms but approximated
  433 +# algorithms (in order to save memory), so you can select as well the sample
  434 +# size to check. For instance for default Redis will check three keys and
  435 +# pick the one that was used less recently, you can change the sample size
  436 +# using the following configuration directive.
  437 +#
  438 +# maxmemory-samples 3
  439 +
  440 +############################## APPEND ONLY MODE ###############################
  441 +
  442 +# By default Redis asynchronously dumps the dataset on disk. This mode is
  443 +# good enough in many applications, but an issue with the Redis process or
  444 +# a power outage may result into a few minutes of writes lost (depending on
  445 +# the configured save points).
  446 +#
  447 +# The Append Only File is an alternative persistence mode that provides
  448 +# much better durability. For instance using the default data fsync policy
  449 +# (see later in the config file) Redis can lose just one second of writes in a
  450 +# dramatic event like a server power outage, or a single write if something
  451 +# wrong with the Redis process itself happens, but the operating system is
  452 +# still running correctly.
  453 +#
  454 +# AOF and RDB persistence can be enabled at the same time without problems.
  455 +# If the AOF is enabled on startup Redis will load the AOF, that is the file
  456 +# with the better durability guarantees.
  457 +#
  458 +# Please check http://redis.io/topics/persistence for more information.
  459 +
  460 +appendonly no
  461 +
  462 +# The name of the append only file (default: "appendonly.aof")
  463 +
  464 +appendfilename "appendonly.aof"
  465 +
  466 +# The fsync() call tells the Operating System to actually write data on disk
  467 +# instead to wait for more data in the output buffer. Some OS will really flush
  468 +# data on disk, some other OS will just try to do it ASAP.
  469 +#
  470 +# Redis supports three different modes:
  471 +#
  472 +# no: don't fsync, just let the OS flush the data when it wants. Faster.
  473 +# always: fsync after every write to the append only log . Slow, Safest.
  474 +# everysec: fsync only one time every second. Compromise.
  475 +#
  476 +# The default is "everysec", as that's usually the right compromise between
  477 +# speed and data safety. It's up to you to understand if you can relax this to
  478 +# "no" that will let the operating system flush the output buffer when
  479 +# it wants, for better performances (but if you can live with the idea of
  480 +# some data loss consider the default persistence mode that's snapshotting),
  481 +# or on the contrary, use "always" that's very slow but a bit safer than
  482 +# everysec.
  483 +#
  484 +# More details please check the following article:
  485 +# http://antirez.com/post/redis-persistence-demystified.html
  486 +#
  487 +# If unsure, use "everysec".
  488 +
  489 +# appendfsync always
  490 +appendfsync everysec
  491 +# appendfsync no
  492 +
  493 +# When the AOF fsync policy is set to always or everysec, and a background
  494 +# saving process (a background save or AOF log background rewriting) is
  495 +# performing a lot of I/O against the disk, in some Linux configurations
  496 +# Redis may block too long on the fsync() call. Note that there is no fix for
  497 +# this currently, as even performing fsync in a different thread will block
  498 +# our synchronous write(2) call.
  499 +#
  500 +# In order to mitigate this problem it's possible to use the following option
  501 +# that will prevent fsync() from being called in the main process while a
  502 +# BGSAVE or BGREWRITEAOF is in progress.
  503 +#
  504 +# This means that while another child is saving, the durability of Redis is
  505 +# the same as "appendfsync none". In practical terms, this means that it is
  506 +# possible to lose up to 30 seconds of log in the worst scenario (with the
  507 +# default Linux settings).
  508 +#
  509 +# If you have latency problems turn this to "yes". Otherwise leave it as
  510 +# "no" that is the safest pick from the point of view of durability.
  511 +
  512 +no-appendfsync-on-rewrite no
  513 +
  514 +# Automatic rewrite of the append only file.
  515 +# Redis is able to automatically rewrite the log file implicitly calling
  516 +# BGREWRITEAOF when the AOF log size grows by the specified percentage.
  517 +#
  518 +# This is how it works: Redis remembers the size of the AOF file after the
  519 +# latest rewrite (if no rewrite has happened since the restart, the size of
  520 +# the AOF at startup is used).
  521 +#
  522 +# This base size is compared to the current size. If the current size is
  523 +# bigger than the specified percentage, the rewrite is triggered. Also
  524 +# you need to specify a minimal size for the AOF file to be rewritten, this
  525 +# is useful to avoid rewriting the AOF file even if the percentage increase
  526 +# is reached but it is still pretty small.
  527 +#
  528 +# Specify a percentage of zero in order to disable the automatic AOF
  529 +# rewrite feature.
  530 +
  531 +auto-aof-rewrite-percentage 100
  532 +auto-aof-rewrite-min-size 64mb
  533 +
  534 +################################ LUA SCRIPTING ###############################
  535 +
  536 +# Max execution time of a Lua script in milliseconds.
  537 +#
  538 +# If the maximum execution time is reached Redis will log that a script is
  539 +# still in execution after the maximum allowed time and will start to
  540 +# reply to queries with an error.
  541 +#
  542 +# When a long running script exceeds the maximum execution time only the
  543 +# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
  544 +# used to stop a script that has not yet called write commands. The second
  545 +# is the only way to shut down the server in case a write command was
  546 +# already issued by the script but the user doesn't want to wait for the natural
  547 +# termination of the script.
  548 +#
  549 +# Set it to 0 or a negative value for unlimited execution without warnings.
  550 +lua-time-limit 5000
  551 +
  552 +################################## SLOW LOG ###################################
  553 +
  554 +# The Redis Slow Log is a system to log queries that exceeded a specified
  555 +# execution time. The execution time does not include the I/O operations
  556 +# like talking with the client, sending the reply and so forth,
  557 +# but just the time needed to actually execute the command (this is the only
  558 +# stage of command execution where the thread is blocked and can not serve
  559 +# other requests in the meantime).
  560 +#
  561 +# You can configure the slow log with two parameters: one tells Redis
  562 +# what is the execution time, in microseconds, to exceed in order for the
  563 +# command to get logged, and the other parameter is the length of the
  564 +# slow log. When a new command is logged the oldest one is removed from the
  565 +# queue of logged commands.
  566 +
  567 +# The following time is expressed in microseconds, so 1000000 is equivalent
  568 +# to one second. Note that a negative number disables the slow log, while
  569 +# a value of zero forces the logging of every command.
  570 +slowlog-log-slower-than 10000
  571 +
  572 +# There is no limit to this length. Just be aware that it will consume memory.
  573 +# You can reclaim memory used by the slow log with SLOWLOG RESET.
  574 +slowlog-max-len 128
  575 +
  576 +################################ LATENCY MONITOR ##############################
  577 +
  578 +# The Redis latency monitoring subsystem samples different operations
  579 +# at runtime in order to collect data related to possible sources of
  580 +# latency of a Redis instance.
  581 +#
  582 +# Via the LATENCY command this information is available to the user that can
  583 +# print graphs and obtain reports.
  584 +#
  585 +# The system only logs operations that were performed in a time equal or
  586 +# greater than the amount of milliseconds specified via the
  587 +# latency-monitor-threshold configuration directive. When its value is set
  588 +# to zero, the latency monitor is turned off.
  589 +#
  590 +# By default latency monitoring is disabled since it is mostly not needed
  591 +# if you don't have latency issues, and collecting data has a performance
  592 +# impact, that while very small, can be measured under big load. Latency
  593 +# monitoring can easily be enabled at runtime using the command
  594 +# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
  595 +latency-monitor-threshold 0
  596 +
  597 +############################# Event notification ##############################
  598 +
  599 +# Redis can notify Pub/Sub clients about events happening in the key space.
  600 +# This feature is documented at http://redis.io/topics/notifications
  601 +#
  602 +# For instance if keyspace events notification is enabled, and a client
  603 +# performs a DEL operation on key "foo" stored in the Database 0, two
  604 +# messages will be published via Pub/Sub:
  605 +#
  606 +# PUBLISH __keyspace@0__:foo del
  607 +# PUBLISH __keyevent@0__:del foo
  608 +#
  609 +# It is possible to select the events that Redis will notify among a set
  610 +# of classes. Every class is identified by a single character:
  611 +#
  612 +# K Keyspace events, published with __keyspace@<db>__ prefix.
  613 +# E Keyevent events, published with __keyevent@<db>__ prefix.
  614 +# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
  615 +# $ String commands
  616 +# l List commands
  617 +# s Set commands
  618 +# h Hash commands
  619 +# z Sorted set commands
  620 +# x Expired events (events generated every time a key expires)
  621 +# e Evicted events (events generated when a key is evicted for maxmemory)
  622 +# A Alias for g$lshzxe, so that the "AKE" string means all the events.
  623 +#
  624 +# The "notify-keyspace-events" takes as argument a string that is composed
  625 +# by zero or multiple characters. The empty string means that notifications
  626 +# are disabled at all.
  627 +#
  628 +# Example: to enable list and generic events, from the point of view of the
  629 +# event name, use:
  630 +#
  631 +# notify-keyspace-events Elg
  632 +#
  633 +# Example 2: to get the stream of the expired keys subscribing to channel
  634 +# name __keyevent@0__:expired use:
  635 +#
  636 +# notify-keyspace-events Ex
  637 +#
  638 +# By default all notifications are disabled because most users don't need
  639 +# this feature and the feature has some overhead. Note that if you don't
  640 +# specify at least one of K or E, no events will be delivered.
  641 +notify-keyspace-events ""
  642 +
  643 +############################### ADVANCED CONFIG ###############################
  644 +
  645 +# Hashes are encoded using a memory efficient data structure when they have a
  646 +# small number of entries, and the biggest entry does not exceed a given
  647 +# threshold. These thresholds can be configured using the following directives.
  648 +hash-max-ziplist-entries 512
  649 +hash-max-ziplist-value 64
  650 +
  651 +# Similarly to hashes, small lists are also encoded in a special way in order
  652 +# to save a lot of space. The special representation is only used when
  653 +# you are under the following limits:
  654 +list-max-ziplist-entries 512
  655 +list-max-ziplist-value 64
  656 +
  657 +# Sets have a special encoding in just one case: when a set is composed
  658 +# of just strings that happens to be integers in radix 10 in the range
  659 +# of 64 bit signed integers.
  660 +# The following configuration setting sets the limit in the size of the
  661 +# set in order to use this special memory saving encoding.
  662 +set-max-intset-entries 512
  663 +
  664 +# Similarly to hashes and lists, sorted sets are also specially encoded in
  665 +# order to save a lot of space. This encoding is only used when the length and
  666 +# elements of a sorted set are below the following limits:
  667 +zset-max-ziplist-entries 128
  668 +zset-max-ziplist-value 64
  669 +
  670 +# HyperLogLog sparse representation bytes limit. The limit includes the
  671 +# 16 bytes header. When an HyperLogLog using the sparse representation crosses
  672 +# this limit, it is converted into the dense representation.
  673 +#
  674 +# A value greater than 16000 is totally useless, since at that point the
  675 +# dense representation is more memory efficient.
  676 +#
  677 +# The suggested value is ~ 3000 in order to have the benefits of
  678 +# the space efficient encoding without slowing down too much PFADD,
  679 +# which is O(N) with the sparse encoding. The value can be raised to
  680 +# ~ 10000 when CPU is not a concern, but space is, and the data set is
  681 +# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
  682 +hll-sparse-max-bytes 3000
  683 +
  684 +# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
  685 +# order to help rehashing the main Redis hash table (the one mapping top-level
  686 +# keys to values). The hash table implementation Redis uses (see dict.c)
  687 +# performs a lazy rehashing: the more operation you run into a hash table
  688 +# that is rehashing, the more rehashing "steps" are performed, so if the
  689 +# server is idle the rehashing is never complete and some more memory is used
  690 +# by the hash table.
  691 +#
  692 +# The default is to use this millisecond 10 times every second in order to
  693 +# active rehashing the main dictionaries, freeing memory when possible.
  694 +#
  695 +# If unsure:
  696 +# use "activerehashing no" if you have hard latency requirements and it is
  697 +# not a good thing in your environment that Redis can reply from time to time
  698 +# to queries with 2 milliseconds delay.
  699 +#
  700 +# use "activerehashing yes" if you don't have such hard requirements but
  701 +# want to free memory asap when possible.
  702 +activerehashing yes
  703 +
  704 +# The client output buffer limits can be used to force disconnection of clients
  705 +# that are not reading data from the server fast enough for some reason (a
  706 +# common reason is that a Pub/Sub client can't consume messages as fast as the
  707 +# publisher can produce them).
  708 +#
  709 +# The limit can be set differently for the three different classes of clients:
  710 +#
  711 +# normal -> normal clients including MONITOR clients
  712 +# slave -> slave clients
  713 +# pubsub -> clients subscribed to at least one pubsub channel or pattern
  714 +#
  715 +# The syntax of every client-output-buffer-limit directive is the following:
  716 +#
  717 +# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
  718 +#
  719 +# A client is immediately disconnected once the hard limit is reached, or if
  720 +# the soft limit is reached and remains reached for the specified number of
  721 +# seconds (continuously).
  722 +# So for instance if the hard limit is 32 megabytes and the soft limit is
  723 +# 16 megabytes / 10 seconds, the client will get disconnected immediately
  724 +# if the size of the output buffers reach 32 megabytes, but will also get
  725 +# disconnected if the client reaches 16 megabytes and continuously overcomes
  726 +# the limit for 10 seconds.
  727 +#
  728 +# By default normal clients are not limited because they don't receive data
  729 +# without asking (in a push way), but just after a request, so only
  730 +# asynchronous clients may create a scenario where data is requested faster
  731 +# than it can be read.
  732 +#
  733 +# Instead there is a default limit for pubsub and slave clients, since
  734 +# subscribers and slaves receive data in a push fashion.
  735 +#
  736 +# Both the hard or the soft limit can be disabled by setting them to zero.
  737 +client-output-buffer-limit normal 0 0 0
  738 +client-output-buffer-limit slave 256mb 64mb 60
  739 +client-output-buffer-limit pubsub 32mb 8mb 60
  740 +
  741 +# Redis calls an internal function to perform many background tasks, like
  742 +# closing connections of clients in timeout, purging expired keys that are
  743 +# never requested, and so forth.
  744 +#
  745 +# Not all tasks are performed with the same frequency, but Redis checks for
  746 +# tasks to perform accordingly to the specified "hz" value.
  747 +#
  748 +# By default "hz" is set to 10. Raising the value will use more CPU when
  749 +# Redis is idle, but at the same time will make Redis more responsive when
  750 +# there are many keys expiring at the same time, and timeouts may be
  751 +# handled with more precision.
  752 +#
  753 +# The range is between 1 and 500, however a value over 100 is usually not
  754 +# a good idea. Most users should use the default of 10 and raise this up to
  755 +# 100 only in environments where very low latency is required.
  756 +hz 10
  757 +
  758 +# When a child rewrites the AOF file, if the following option is enabled
  759 +# the file will be fsync-ed every 32 MB of data generated. This is useful
  760 +# in order to commit the file to the disk more incrementally and avoid
  761 +# big latency spikes.
  762 +aof-rewrite-incremental-fsync yes
  763 +
... ...
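The template is the stock Redis example configuration with a single site-specific change: the bind directive is rendered from node['peers']['database'], so Redis listens on loopback plus the database host's internal address. A minimal sketch of how that one ERB interpolation renders; the address below is made up for illustration, the real value comes from the node's 'peers' attributes:

require 'erb'

# Hypothetical stand-in for the Chef node object; only the key the template reads.
node = { 'peers' => { 'database' => '192.0.2.10' } }

line = ERB.new(%q{bind 127.0.0.1 <%= node['peers']['database'] %>}).result(binding)
puts line   # => "bind 127.0.0.1 192.0.2.10"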
test/redis_test.sh 0 → 100644
... ... @@ -0,0 +1,11 @@
  1 +. $(dirname $0)/test_helper.sh
  2 +
  3 +test_redis_running() {
  4 + assertTrue "redis running" 'run_on database pgrep -f redis'
  5 +}
  6 +
  7 +test_redis_listens_on_local_network() {
  8 + assertTrue 'redis listening on local network' 'nc -z -w 1 $database 6379'
  9 +}
  10 +
  11 +. shunit2
... ...
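The smoke test relies on helpers that are not part of this commit: run_on and the $database variable come from test_helper.sh, and shunit2 runs anything named test_*. A rough manual equivalent of the two assertions, assuming run_on is an ssh wrapper and $database holds the database server's address (placeholder value below):

database=192.0.2.10                # placeholder; substitute the real database host
ssh "$database" pgrep -f redis     # a redis process is running on the host
nc -z -w 1 "$database" 6379        # the redis port answers from the test machine
redis-cli -h "$database" ping      # optional extra check; expect "PONG"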