News Feeds

Now the US Secretary of Defense, Too, Warns Russia Against Influencing the Election

Telepolis - 7 hours 48 min ago
Attempts to interfere in democratic processes will not be overlooked, Carter said, referring to "Russian aggression"
Categories: Politics + Culture

All Information Without Guarantee

Telepolis - 7 hours 48 min ago
Lückenpresse: The End of Journalism as We Knew It
Categories: Politics + Culture

Will the CSU Executive Board Retreat This Weekend Bring a Split?

Telepolis - Tue, 2016-09-06 22:00
Seehofer wants to "see whether we can come to an agreement with the CDU"
Categories: Politics + Culture

Demographic Divides in Europe

Telepolis - Tue, 2016-09-06 22:00
According to calculations by the Wittgenstein Centre, there will be strong demographic shifts by 2050, both with and without migration taken into account
Categories: Politics + Culture

MyRocks Docker images

MySQL High Performance - Tue, 2016-09-06 20:28

In this post, I’ll point you to MyRocks Docker images with binaries, allowing you to install and play with the software.

During the @Scale conference, Facebook announced that MyRocks is mature enough that it has been installed on 5% of Facebook’s MySQL slaves. This has saved 50% of the space on these slaves, which allows them to decrease the number of servers by half. Check out the announcement here:  https://code.facebook.com/posts/190251048047090/myrocks-a-space-and-write-optimized-mysql-database/

Those are pretty impressive numbers, so I decided to take a serious look at MyRocks. The biggest showstopper is usually binary availability, since Facebook only provides the source code: https://github.com/facebook/mysql-5.6.

You can get the image from https://hub.docker.com/r/perconalab/myrocks/.

To start MyRocks:

docker run -d --name myr -P  perconalab/myrocks

To access it, use a regular MySQL client:

mysql -h127.0.0.1
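
Note that with -P, Docker publishes the container's port 3306 on a random host port, so you may first need to look up the mapped port and pass it to the client. A quick sketch (the port number shown is just an example of what Docker might assign):

docker port myr 3306
# prints something like 0.0.0.0:32768
mysql -h127.0.0.1 -P32768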

From there you should see RocksDB installed:

show engines;
+------------+---------+------------------------+--------------+------+------------+
| Engine     | Support | Comment                | Transactions | XA   | Savepoints |
+------------+---------+------------------------+--------------+------+------------+
| ROCKSDB    | DEFAULT | RocksDB storage engine | YES          | YES  | YES        |
+------------+---------+------------------------+--------------+------+------------+

I hope this makes it easier to start experimenting with MyRocks!

MongoDB at Percona Live Europe

MySQL High Performance - Tue, 2016-09-06 15:28

This year, you will find a great deal about MongoDB at Percona Live Europe.

As we continue to work on growing the independent MongoDB ecosystem, this year’s Percona Live Europe in Amsterdam includes many talks about MongoDB. If your company uses MongoDB technologies, is focused exclusively on developing with MongoDB or MongoDB operations, or is just evaluating MongoDB, attending Percona Live Europe will prove a valuable experience.  

As always with Percona Live conferences, the focus is squarely on the technical content — not sales pitches. We encourage our speakers to tell the truth: the good, the bad and the ugly. There is never a “silver bullet” when it comes to technology — only tradeoffs between different solution options.

As someone who has worked in database operations for more than 15 years, I recognize and respect the value of “negative information.” I like knowing what does not work, what you should not do and where trouble lies. Negative information often proves more valuable than knowing how great the features of a specific technology work — especially since the product’s marketing team tends to highlight those very well (and they seldom require independent coverage).

For MongoDB at this year’s Percona Live Europe:
  • We have talks about MongoRocks, a RocksDB powered storage engine for MongoDB — the one you absolutely need to know about if you’re looking to run the most efficient MongoDB deployment at scale!  
  • We will cover MongoDB Backups best practices, as well as several talks about MongoDB monitoring and management  (1, 2, 3) — all of them with MongoDB Community Edition and Percona Server for MongoDB (so they don’t require a MongoDB Enterprise subscription).

There will also be a number of talks about how MongoDB interfaces with other technologies. We show how ToroDB can use the MongoDB protocol while storing data in a relational database (and why that might be a good idea), we contrast and compare MySQL and MongoDB Geospatial features, and we examine MongoDB from a MySQL DBA's point of view.

We also show how to use Apache Spark to unify data from MongoDB, MySQL, and Redis, and discuss best practices for choosing databases for different application needs.

Finally, if you’re just starting with MongoDB and would like a jump start before attending more detailed MongoDB talks, we’ve got a full day MongoDB 101 tutorial for you.

Join us for the full conference, or register for just one day if that is all your schedule allows. But come to Percona Live Europe in Amsterdam on October 3-5 to get the best and latest MongoDB information.

The USA Is Said to Be the Victim of a Large-Scale "Russian Influence Operation"

Telepolis - Mon, 2016-09-05 22:00
An article in the Washington Post featuring the usual anonymous "officials" shows how strategic communication works with the help of a free press
Categories: Politics + Culture

Excellent De-qualification

Telepolis - Mon, 2016-09-05 22:00
The new academic precariat
Categories: Politics + Culture

AfD: The Phantom Party That Is Changing the Republic

Telepolis - Sun, 2016-09-04 22:00
The AfD's appeal remains the protest of the fearful, with no goal other than opposing immigration and demanding security
Categories: Politics + Culture

On Rape and Reproductive Abuse

Telepolis - Sat, 2016-09-03 22:00
When does consensual sex become rape? - A question that also plays a central role in reproductive abuse
Categories: Politics + Culture

"Das Schachbrett des Teufels"

Telepolis - Sa, 2016-09-03 22:00
Interview mit dem Allen-Dulles-Biographen David Talbot
Kategorien: Politik + Kultur

InnoDB Troubleshooting: Q & A

MySQL High Performance - Fri, 2016-09-02 21:12

In this blog, I will provide answers to the Q & A for the InnoDB Troubleshooting webinar.

First, I want to thank everybody for attending the August 11 webinar. The recording and slides for the webinar are available here. Below is the list of your questions that I wasn’t able to answer during the webinar, with responses:

Q: What’s a good speed for buffer pool speed/size for maximum query performance?

A: I am sorry, I don't quite understand the question. The InnoDB buffer pool is an in-memory buffer. In an ideal case, your whole active dataset (rows that are accessed by the application regularly) should be in the buffer pool. There is a good blog post by Peter Zaitsev describing how to find the best size for the buffer pool.
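
As a rough sketch of how to check this, compare the configured buffer pool size with the buffer pool read counters; the variables below are standard system and status variables, but the "close to 1" threshold is only a rule of thumb:

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
-- Hit ratio = 1 - (Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests);
-- a value close to 1 suggests the active dataset fits in the buffer pool.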

Q: Any maximum range for these InnoDB options?

A: I am again sorry, I only see questions after the webinar and don't know which slide you were on when you asked about options. Generally speaking, though, the maximum ranges are limited by hardware: the size of the InnoDB buffer pool is limited by the amount of physical memory you have, innodb_io_capacity by the number of IOPS your disk can handle, and the number of concurrent threads by the number of CPU cores.

Q: On an AWS r3.4xlarge, 16 CPU, 119GB of RAM, EBS volumes, what innodb_thread_concurrency, innodb_read_io_threads, innodb_write_io_threads would you recommend? And innodb_read_io_capacity?

A: innodb_thread_concurrency = 16, innodb_read_io_threads = 8, innodb_write_io_threads = 8; innodb_io_capacity depends on the speed of your disks. As far as I know, AWS offers disks with different speeds. You should check the IOPS your disks can handle when setting innodb_io_capacity, and their maximum IOPS when setting innodb_io_capacity_max.
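
As a my.cnf sketch for an instance like the one described (the two innodb_io_capacity values are placeholders; derive them from the provisioned and maximum IOPS of your EBS volumes):

[mysqld]
innodb_thread_concurrency = 16
innodb_read_io_threads = 8
innodb_write_io_threads = 8
# Placeholders: set from the volume's provisioned and burst IOPS
innodb_io_capacity = 2000
innodb_io_capacity_max = 4000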

Q: About InnoDB structures and parallelism: Are there InnoDB settings that can prevent or reduce latching (causes semaphore locks and shutdown after 600s) that occur trying to add an index object to memory but only DML queries on the primary key are running?

A: Unfortunately, semaphore locks for the CREATE INDEX command are not avoidable. You can only affect other factors that speed up index creation: for example, how fast you write records to the disk or how many concurrent queries you run. Kill queries that have been waiting for a lock for too long. There is an old feature request asking to handle long semaphore waits gracefully. Consider clicking the "Affects Me" button to bring it to the developers' attention.

Q: How can we check these threads?

A: I assume you are asking about InnoDB threads? You can find information about running threads in the output of SHOW ENGINE INNODB STATUS:

--------
FILE I/O
--------
I/O thread 0 state: waiting for completed aio requests (insert buffer thread)
I/O thread 1 state: waiting for completed aio requests (log thread)
I/O thread 2 state: waiting for completed aio requests (read thread)
I/O thread 3 state: waiting for completed aio requests (read thread)
I/O thread 4 state: waiting for completed aio requests (read thread)
I/O thread 5 state: waiting for completed aio requests (read thread)
I/O thread 6 state: waiting for completed aio requests (write thread)
I/O thread 7 state: waiting for completed aio requests (write thread)
I/O thread 8 state: waiting for completed aio requests (write thread)
I/O thread 9 state: waiting for completed aio requests (write thread)
Pending normal aio reads: 0 [0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0] ,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 1; buffer pool: 0
529 OS file reads, 252 OS file writes, 251 OS fsyncs
0.74 reads/s, 16384 avg bytes/read, 7.97 writes/s, 7.94 fsyncs/s

And in the Performance Schema THREADS table:

mysql> select thread_id, name, type from performance_schema.threads where name like '%innodb%';
+-----------+----------------------------------------+------------+
| thread_id | name                                   | type       |
+-----------+----------------------------------------+------------+
|         2 | thread/innodb/io_handler_thread        | BACKGROUND |
|         3 | thread/innodb/io_handler_thread        | BACKGROUND |
|         4 | thread/innodb/io_handler_thread        | BACKGROUND |
|         5 | thread/innodb/io_handler_thread        | BACKGROUND |
|         6 | thread/innodb/io_handler_thread        | BACKGROUND |
|         7 | thread/innodb/io_handler_thread        | BACKGROUND |
|         8 | thread/innodb/io_handler_thread        | BACKGROUND |
|         9 | thread/innodb/io_handler_thread        | BACKGROUND |
|        10 | thread/innodb/io_handler_thread        | BACKGROUND |
|        11 | thread/innodb/io_handler_thread        | BACKGROUND |
|        13 | thread/innodb/srv_lock_timeout_thread  | BACKGROUND |
|        14 | thread/innodb/srv_monitor_thread       | BACKGROUND |
|        15 | thread/innodb/srv_error_monitor_thread | BACKGROUND |
|        16 | thread/innodb/srv_master_thread        | BACKGROUND |
|        17 | thread/innodb/srv_purge_thread         | BACKGROUND |
|        18 | thread/innodb/page_cleaner_thread      | BACKGROUND |
|        19 | thread/innodb/lru_manager_thread       | BACKGROUND |
+-----------+----------------------------------------+------------+
17 rows in set (0.00 sec)

Q: Can you give a brief explanation of how an InnoDB thread is not the same as a connection thread?

A: You create a MySQL connection thread each time the client connects to the server. Generally, the lifetime of this thread is the same as the connection's (I won't discuss the thread cache and thread pool plugin here, to avoid unnecessary complexity). This way, if you have 100 connections you have 100 connection threads. But not all of these threads do something: some are actively querying MySQL, but others are sleeping. You can find the number of threads actively doing something by examining the status variable Threads_running. InnoDB doesn't create as many threads as there are connections to perform its job effectively. It creates fewer threads (ideally, the same number as the number of CPU cores). So, for example, just 16 InnoDB threads can handle 100 or more connection threads effectively.
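
To illustrate the difference, you can compare the number of connection threads with the number actually doing work at a given moment:

SHOW GLOBAL STATUS LIKE 'Threads_connected';  -- one thread per client connection
SHOW GLOBAL STATUS LIKE 'Threads_running';    -- only the threads not sleeping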

Q: How can we delete bulk data in Percona XtraDB Cluster without affecting production? The table has nearly 6 million records and is about 40 GB in size.

A: You can use the utility pt-archiver. It deletes rows in chunks. While your database will still have to handle all these writes, the --max-flow-ctl option pauses the purge job if the cluster spends too much time paused for flow control.
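
A minimal pt-archiver sketch, assuming a hypothetical host, schema, table, and retention rule (tune the chunk size and flow-control threshold to your cluster):

pt-archiver \
  --source h=localhost,D=mydb,t=big_table \
  --purge \
  --where "created_at < NOW() - INTERVAL 1 YEAR" \
  --limit 1000 --commit-each \
  --max-flow-ctl 20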

Q: Why do we sometimes get "--tc-heuristic-recover" message in error logs? Especially when we recover after a crash? What does this indicate? And should we commit or rollback?

A: This means you used two transactional engines that support XA in the same transaction, and mysqld crashed in the middle of the transaction. Now mysqld cannot determine which strategy to use when recovering transactions: either COMMIT or ROLLBACK. Strangely, this option is documented as “not used”. It certainly is, however. Test case for bug #70860 proves it. I reported a documentation bug #82780.

Q: Which parameter controls the InnoDB thread count?

A: The main parameter is innodb_thread_concurrency. For fine-tuning, use innodb_read_io_threads, innodb_write_io_threads, innodb_purge_threads, and innodb_page_cleaners.

Q: At what frequency will the InnoDB status be dumped in a file by using innodb-status-file?

A: Approximately every 15 seconds, but it can vary slightly depending on the server load.
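
For reference, the option can be enabled in my.cnf as shown below; mysqld then periodically dumps the same output as SHOW ENGINE INNODB STATUS into an innodb_status.<pid> file in the data directory:

[mysqld]
innodb_status_file = 1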

Q: I faced an issue where a disk got detached from a running server due to a problem on AWS EC2, and MySQL went into default mode. After MySQL was stopped and started, we observed that the slave had skipped around 15 minutes of data. We noticed it because of a foreign key relationship issue. Can you please explain why the slave skipped data?

A: Amazon Aurora supports two kinds of replication: physical as implemented by Amazon (this is the default for replicas in the same region), and the regular asynchronous replication for cross-region replication. If you use the former, I cannot help you because this is a closed-source Amazon feature. You need to report a bug to Amazon. If you use the latter, this looks like a bug too. In my experience, it should not happen. With regular replication, you need to check which transactions were applied (best if you use GTIDs, or at least the log-slave-updates option) and which were not. If you find a gap, report a bug at bugs.mysql.com.
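
As a sketch of the gap check with GTID-based replication (assuming GTIDs are enabled on both servers):

-- On the master, note Executed_Gtid_Set:
SHOW MASTER STATUS;
-- On the slave, compare Retrieved_Gtid_Set and Executed_Gtid_Set:
SHOW SLAVE STATUS\G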

Q: Can you explain more about adaptive hash index?

A: InnoDB stores its indexes on disk as a B-Tree. While B-Tree indexes are effective in general, some queries can take advantage of much simpler hash indexes. While your server is in use, InnoDB analyzes the queries it is currently processing and builds an in-memory hash index inside the buffer pool (using the prefix of the B-Tree key). The adaptive hash index generally works well: "with some workloads, the speedup from hash index lookups greatly outweighs the extra work to monitor index lookups and maintain the hash index structure." One issue with the adaptive hash index is that until version 5.7.8, it was protected by a single latch, which could be a contention point under heavy workloads. Since 5.7.8, the adaptive hash index can be partitioned. The number of parts is controlled by the option innodb_adaptive_hash_index_parts.
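
As a sketch of how to inspect and, if necessary, disable it (both statements are standard; whether disabling helps depends entirely on your workload):

-- The INSERT BUFFER AND ADAPTIVE HASH INDEX section of the output shows
-- hash searches/s vs. non-hash searches/s:
SHOW ENGINE INNODB STATUS\G
-- The feature can be switched off at runtime if it is a contention point:
SET GLOBAL innodb_adaptive_hash_index = OFF;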

MHA Quick Start Guide

MySQL High Performance - Fri, 2016-09-02 19:22

MHA (Master High Availability Manager and tools for MySQL) is one of the most important pieces of our managed services. When properly set up, it can check replication health, move writer and reader virtual IPs, perform failovers, and have its output constantly monitored by Nagios. It is easy to deploy and follows the KISS (Keep It Simple, Stupid) philosophy that I love so much.

This blog post is a quick start guide to try it out and play with it in your own testing environment. I assume that you already know how to install software, deal with SSH keys and setup replication in MySQL. The post just covers MHA configuration.

Testing environment

Taken from /etc/hosts

192.168.1.116 mysql-server1
192.168.1.117 mysql-server2
192.168.1.118 mysql-server3
192.168.1.119 mha-manager

mysql-server1: Our master MySQL server with 5.6
mysql-server2: Slave server
mysql-server3: Slave server
mha-manager: The server that monitors replication and from which we manage MHA. It also needs to meet some Perl dependencies.

We just introduced some new concepts, the MHA Node and MHA Manager:

MHA Node

It is installed and runs on each MySQL server. This is the piece of software that is invoked by the manager every time we want to do something, such as a failover or a check.

MHA Manager

As explained before, this is our operations center. The manager monitors the servers and replication, and includes several administrative command-line tools.

Pre-requisites
  • Replication must already be running. MHA manages replication and monitors it, but it is not a tool to deploy it. So MySQL and replication need to be running already.
  • All hosts should be able to connect to each other using public SSH keys.
  • All nodes need to be able to connect to each other’s MySQL servers.
  • All nodes should have the same replication user and password.
  • In the case of multi-master setups, only one writable node is allowed. All others need to be configured with read_only.
  • MySQL version has to be 5.0 or later.
  • Candidates for master failover should have binary log enabled. The replication user must exist there too.
  • Binary log filtering variables should be the same on all servers (replicate-wild%, binlog-do-db…).
  • Disable automatic relay-log purge and do it regularly from a cron task. You can use an MHA-included script called "purge_relay_logs" (see the example cron entry after this list).

While that is a large list of prerequisites, I think they are pretty standard and logical.
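
A cron entry for the relay log purge could look like this sketch; the user, password, paths, and schedule are placeholders, and it is a good idea to stagger the schedule so the purge doesn't run on all nodes at once:

0 5 * * * /usr/bin/purge_relay_logs --user=mha --password=supersecure --disable_relay_log_purge --workdir=/var/tmp >> /var/log/masterha/purge_relay_logs.log 2>&1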

MHA installation

As explained before, the MHA Node needs to be installed on all the nodes. You can download it from this Google Drive link.

This post shows you how to install it using the source code, but there are RPM packages available. There are Deb packages too, but only for older versions. Use the installation method you prefer. This is how to compile it:

tar -xzf mha4mysql-node-0.57.tar.gz
perl Makefile.PL
make
make install

The commands included in the node package are save_binary_logs, filter_mysqlbinlog, purge_relay_logs, and apply_diff_relay_logs: mostly tools that the manager needs to call in order to perform a failover while trying to minimize or avoid any data loss.

On the manager server, you need to install MHA Node plus MHA Manager. This is because MHA Manager depends on a Perl library that comes with MHA Node. The installation process is just the same.

Configuration

We only need one configuration file on the Manager node. The example below is a good starting point:

# cat /etc/app1.cnf
[server default]
# mysql user and password
user=root
password=supersecure
ssh_user=root
# working directory on the manager
manager_workdir=/var/log/masterha/app1
# working directory on MySQL servers
remote_workdir=/var/log/masterha/app1

[server1]
hostname=mysql-server1
candidate_master=1

[server2]
hostname=mysql-server2
candidate_master=1

[server3]
hostname=mysql-server3
no_master=1

So pretty straightforward. It specifies that there are three servers, two that can be master and one that can’t be promoted to master.

Let’s check if we meet some of the pre-requisites. We are going to test if replication is working, can be monitored, and also if SSH connectivity works.

# masterha_check_ssh --conf=/etc/app1.cnf
[...]
[info] All SSH connection tests passed successfully.

It works. Now let’s check MySQL:

# masterha_check_repl --conf=/etc/app1.cnf
[...]
MySQL Replication Health is OK.

Start the manager and operations

Everything is set up and we meet the prerequisites. We can start our manager:

# masterha_manager --remove_dead_master_conf --conf=/etc/app1.cnf
[...]
[info] Starting ping health check on mysql-server1(192.168.1.116:3306)..
[info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..

The manager found our master and is now actively monitoring it using a SELECT command. --remove_dead_master_conf tells the manager that, if the master goes down, it must edit the config file and remove the master's configuration from it after a successful failover. This avoids the "there is a dead slave" error when you restart the manager. All servers listed in the conf should be part of the replication and in good health, or the manager will refuse to work.

Automatic and manual failover

Good, everything is running as expected. What happens if the MySQL master dies!?!

[...]
[warning] Got error on MySQL select ping: 2006 (MySQL server has gone away)
[info] Executing SSH check script: save_binary_logs --command=test --start_pos=4 --binlog_dir=/var/lib/mysql,/var/log/mysql --output_file=/var/log/masterha/app1/save_binary_logs_test --manager_version=0.57 --binlog_prefix=mysql-bin
Creating /var/log/masterha/app1 if not exists.. ok.
Checking output directory is accessible or not.. ok.
Binlog found at /var/log/mysql, up to mysql-bin.000002
[info] HealthCheck: SSH to mha-server1 is reachable.
[...]

First, it tries to connect by SSH to read the binary log and save it. MHA can apply the missing binary log events to the remaining slaves, so they are up to date with everything that happened before the failover. Nice!

These phases follow:

* Phase 1: Configuration Check Phase..
* Phase 2: Dead Master Shutdown Phase..
* Phase 3: Master Recovery Phase..
* Phase 3.1: Getting Latest Slaves Phase..
* Phase 3.2: Saving Dead Master's Binlog Phase..
* Phase 3.3: Determining New Master Phase..
[info] Finding the latest slave that has all relay logs for recovering other slaves..
[info] All slaves received relay logs to the same position. No need to resync each other.
[info] Starting master failover..
[info]
From:
mysql-server1(192.168.1.116:3306) (current master)
 +--mysql-server2(192.168.1.117:3306)
 +--mysql-server3(192.168.1.118:3306)

To:
mysql-server2(192.168.1.117:3306) (new master)
 +--mysql-server3(192.168.1.118:3306)
* Phase 3.3: New Master Diff Log Generation Phase..
* Phase 3.4: Master Log Apply Phase..
* Phase 4: Slaves Recovery Phase..
* Phase 4.1: Starting Parallel Slave Diff Log Generation Phase..
* Phase 4.2: Starting Parallel Slave Log Apply Phase..
* Phase 5: New master cleanup phase..

The phases are pretty self-explanatory. MHA tries to get all the data possible from the master's binary log and the slaves' relay logs (the most advanced one) to avoid losing any data or promoting a slave that was far behind the master. It tries to promote a slave with data as current as possible. We see that server2 has been promoted to master, because in our configuration we specified that server3 shouldn't be promoted.

After the failover, the manager service stops itself. If we check the config file, the failed server is not there anymore. Now the recovery is up to you. You need to get the old master back in the replication chain, then add it again to the config file and start the manager.

It is also possible to perform a manual failover (if, for example, you need to do some maintenance on the master server). To do that you need to:

  • Stop masterha_manager.
  • Run masterha_master_switch --master_state=alive --conf=/etc/app1.cnf. This tells the tool that you want to switch the master, but the actual master is still alive, so there is no need to mark it as dead or remove it from the conf file.

And that’s it. Here is part of the output. It shows the tool making the decision on the new topology and asking the user for confirmation:

[info]
From:
mysql-server1(192.168.1.116:3306) (current master)
 +--mysql-server2(192.168.1.117:3306)
 +--mysql-server3(192.168.1.118:3306)

To:
mysql-server2(192.168.1.117:3306) (new master)
 +--mysql-server3(192.168.1.118:3306)

Starting master switch from mha-server1(192.168.1.116:3306) to mha-server2(192.168.1.117:3306)? (yes/NO): yes
[...]
[info] Switching master to mha-server2(192.168.1.117:3306) completed successfully.

You can also employ some extra parameters that are really useful in some cases:

--orig_master_is_new_slave: if you want to make the old master a slave of the new one.

--running_updates_limit: if the current master is executing write queries that take longer than this parameter's value, or if any of the slaves lag behind the master by more than this value, the master switch aborts. By default, it's 1 (1 second). All these checks are for safety reasons.

--interactive=0: if you want to skip all the confirmation requests and questions masterha_master_switch could ask.

Check this link in case you use GTID and want to avoid problems with errant transactions during the failover:

https://www.percona.com/blog/2015/12/02/gtid-failover-with-mysqlslavetrx-fix-errant-transactions/

Custom scripts

Since this is a quick guide to start playing around with MHA, I won’t cover advanced topics in detail. But I will mention a few:

    • Custom scripts. MHA can move IPs around, shut down a server, and send you a report in case something happens. It needs custom scripts, however. MHA comes with some example scripts, but you would need to write ones that fit your environment. The directives are master_ip_failover_script, shutdown_script, and report_script. With them configured, MHA will send you an email or a message to your mobile device in the case of a failover, shut down the server, and move IPs between servers. Pretty nice! A minimal configuration sketch follows.
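
A minimal sketch of how those directives fit into the /etc/app1.cnf shown earlier; the script paths are hypothetical placeholders for scripts you write yourself or adapt from the MHA samples:

[server default]
master_ip_failover_script=/usr/local/bin/master_ip_failover
shutdown_script=/usr/local/bin/power_manager
report_script=/usr/local/bin/send_report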

Hope you found this quickstart guide useful for your own tests. Remember, one of the most important things: don’t overdo automation!  

Spain Facing New Elections After Nine Months Without a Government

Telepolis - Fri, 2016-09-02 19:14
The conservative Rajoy has also failed to form a government in the second round of voting
Categories: Politics + Culture

Anti-Terror Justice: When in Doubt, the Harsher Sentence

Telepolis - Fri, 2016-09-02 16:00
Protecting society takes priority. Public prosecutor François Molins explains the change of thinking in France
Categories: Politics + Culture

Afghanistan: Of Kings and Bandits

Telepolis - Fri, 2016-09-02 15:00
Old ethnic and ideological-religious conflicts continue to determine the situation in Afghanistan
Categories: Politics + Culture

"Das wird den Flugverkehr auf dieser Strecke marginalisieren"

Telepolis - Fr, 2016-09-02 14:00
Die Bahn will nach über einem Vierteljahrhundert Planung und Bau in vier Stunden von Berlin nach München fahren - wenn sie es schafft, pünktlich zu sein
Kategorien: Politik + Kultur

The Worm, and How It Came into the World

Telepolis - Fri, 2016-09-02 13:00
The documentary thriller "Zero Days": fragments of the Stuxnet code, statements from the CIA, and cyberwar in a journalistic blockbuster
Categories: Politics + Culture

The AfD Photoshoppers, the Poor NPD, and Sparsely Populated Meck-Pomm

Telepolis - Fri, 2016-09-02 11:00
YouTube and co. - our weekly Telepolis video review
Categories: Politics + Culture

Federal Government: (No) Distancing from the Armenia Resolution After All?

Telepolis - Fri, 2016-09-02 10:00
Government spokesman Seibert was expected to announce a negotiated formula to settle the dispute over visiting rights at Incirlik. Update
Categories: Politics + Culture