docs.daveops.net

Snippets for yer computer needs

Distributed Systems

Zookeeper

by default listens on port 2181/tcp

# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections if this is a non-production config
maxClientCnxns=0
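A quick liveness poke via the four-letter-word commands (on newer ZooKeeper versions, `ruok`/`stat` must be in the `4lw.commands.whitelist`):

```shell
# Ask the server if it's OK; a healthy server replies "imok"
echo ruok | nc localhost 2181

# Dump server stats (version, latency, mode: leader/follower/standalone)
echo stat | nc localhost 2181
```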

Kafka

by default listens on port 9092

broker properties

property desc
broker.id unique non-negative integer to be used as broker name
host.name hostname of the broker
num.partitions default number of partitions per topic
default.replication.factor default replication factor for topics
unclean.leader.election.enable choose between consistency and availability in event of an unclean leader election

kafka-topics

arg desc
--create TOPIC create a topic
--delete TOPIC delete a topic
--alter TOPIC alter partitions / replica assignment / configuration for a topic
--list list the topics
--partitions NUM number of partitions for the topic being altered
--zookeeper URLS connection string for zookeeper connection (host:port)
--topic TOPIC the topic to be created/deleted/altered
--replication-factor NUM the replication factor for each partition in the topic being created
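Putting the flags together - a sketch of creating and verifying a topic (hostname and topic name are placeholders):

```shell
# Create a topic with 3 partitions, each replicated to 2 brokers
kafka-topics --zookeeper localhost:2181 --create --topic mytopic \
    --partitions 3 --replication-factor 2

# Confirm it shows up
kafka-topics --zookeeper localhost:2181 --list
```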

kafka-console-producer

kafka-console-producer --broker-list localhost:9092 --topic TOPIC

kafka-console-consumer

kafka-console-consumer --zookeeper localhost:2181 --topic kafkatopic --from-beginning

The Kafka Paper

Kafka is used for the on-line consumption of logs

Pub-sub model, consumers pull

Message is a payload of bytes

Storage:

leverages the filesystem page cache (ie no double buffering)

uses the Linux sendfile API call to avoid overhead

brokers use time-based retention (no knowledge of consumer state) - this allows consumer rewind

consumer groups - each message is delivered to only 1 consumer within a group

a partition within a topic is the smallest unit of parallelism - to even out the load, over-partition the topic

uses Zookeeper to coordinate (still?)

guarantees at-least-once delivery, not exactly-once delivery

no guarantee of ordering from different partitions

no replication at time of paper’s writing

by keeping feature set minimal with small storage format, was more efficient than ActiveMQ/RabbitMQ in testing

RabbitMQ

add/restart/remove cluster node

# Add a node
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@rabbit2
rabbitmqctl start_app

# Restart a node
rabbitmqctl stop
rabbitmq-server -detached

# Remove a node (locally)
rabbitmqctl stop_app
# Remove a node (remotely)
rabbitmqctl forget_cluster_node rabbit@rabbit1

# Get cluster status
rabbitmqctl cluster_status

# Rotate logs
rabbitmqctl rotate_logs <suffix>

Ports

port desc
4369 EPMD
5672 AMQP connections
15672 mgmt interface

Re-syncing mirrors in HA mode

rabbitmqctl list_queues name slave_pids synchronised_slave_pids
# to see where it's not being synced:
rabbitmqctl list_queues name synchronised_slave_pids | grep -v node_name > output

rabbitmqctl sync_queue name
# can do a sad little bash loop after cleaning the output file to just queue names
for i in $(cat output) ; do sudo rabbitmqctl sync_queue "$i" ; done

General tuning tips

ldapsearch

flag desc
-LLL LDIF, with no comments or version
-Z Use StartTLS
-H $HOST $HOST is server
-b $SEARCHBASE $SEARCHBASE is start point for search
-x Simple auth (not SASL)
-W Prompt for simple auth
-D $BINDDN Use $BINDDN to bind to LDAP directory
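Combining the flags above - a hypothetical search (server, base DN, bind DN, and filter are placeholders):

```shell
# Query for a user entry over StartTLS, prompting for the bind password
ldapsearch -LLL -Z -x -W \
    -H ldap://ldap.example.com \
    -D "cn=admin,dc=example,dc=com" \
    -b "dc=example,dc=com" \
    "(uid=jdoe)"
```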

RFCs

Terminology

Key Distribution Center

Holds the key database, the Authentication Server, and the Ticket Granting Server; these may all be handled by one service. It's super sensitive, so treat it as such. Since it handles authentication, at least one KDC should be running at all times.

Authentication Server

Issues the Ticket Granting Ticket. Only the correct password decrypts the TGT. Once the TGT is decrypted, it can be used to request individual service tickets.

The strength of the ticket is the strength of the password! (ie rotate passwords regularly and use strong ones).

Ticket Granting Server

Issues individual service tickets.

Realm

A sort of namespace for principals (ie users, services).
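The flow above in practice - obtain a TGT, then let it fetch service tickets transparently (realm and principal are placeholders):

```shell
# Request a TGT from the AS; the password decrypts it locally
kinit jdoe@EXAMPLE.COM

# List cached tickets - the krbtgt/EXAMPLE.COM entry is the TGT
klist

# Destroy the ticket cache when done
kdestroy
```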

KRB4 vs. KRB5

Kerberos 4

Kerberos 5

Resources

OwnCloud