Jenkins node (agent) on shared web server - what are the right permissions
Problem
I have a shared web server with multiple sites; each site has a dedicated user as the file owner, with the group apache.
I want to install a Jenkins node on the server, but how will it be able to change files and run commands like git pull? I think that using runuser will miss the point. The script on the node will basically run git pull, drush commands (Drupal sites), rsync from a remote server and more. I will also need to run chmod and chown at the end, but maybe for those commands I will just use the sudoers file (?).
Thanks.
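For the chmod/chown part, a sudoers fragment can indeed restrict the jenkins user to exactly those commands. A hypothetical sketch (the site name and paths below are made up, not from the post; always validate such a file with `visudo -cf` before installing it):

```
# /etc/sudoers.d/jenkins-deploy -- hypothetical example
# allow the jenkins user to fix ownership/permissions only under one site's docroot
jenkins ALL=(root) NOPASSWD: /bin/chown -R site1\:apache /var/www/site1, /bin/chmod -R g+rX /var/www/site1
```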
Solution
A not very devopsy, but quite simple and effective solution with no security implications (no credential exchange or authentication required) would be to establish a file-based "message" exchange scheme, in a "mailbox" - a well-known filesystem location (set up once and owned by root) with 2 directories:
- one owned by the jenkins user, which creates request files containing deployment request information, one for each site
- one owned by the apache group, where each site's dedicated user creates their own response files containing request handling information for deployment requests for their site
When jenkins processing reaches a deployment stage for a particular site, it creates a corresponding request file with the necessary information in it.
Each site user periodically (cron-driven, for example) checks for request files pertaining to their site, handles each request as appropriate, following their own site's policies, and provides status updates in the respective response file, which the jenkins user checks periodically.
When request handling is completed, the jenkins user removes the request file, signalling that it received the "message", and then the site user's periodic job can remove the corresponding response file.
The names of the request and response files can be used to encode the particular site and request identification, so that the periodic checks don't have to fumble through multiple files.
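For instance, with the `<user>.<request-id>.yaml` naming convention used by the script below, a periodic check can identify its own requests from the filename alone (a minimal sketch; the helper name is mine, not from the answer):

```python
import re

def parse_msg_filename(filename):
    # split 'dancorn.20.yaml' into ('dancorn', '20'); None for anything else
    m = re.match(r'(.*)\.(.*)\.yaml$', filename)
    return m.groups() if m else None

print(parse_msg_filename('dancorn.20.yaml'))  # ('dancorn', '20')
print(parse_msg_filename('random-file.txt'))  # None
```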
The scheme can easily work across machines (if, for example, some of the sites are migrated to other servers) simply by placing the "mailbox" on a shared filesystem accessible from all those machines.
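The full request/response lifecycle described above can be walked through with nothing but stdlib file operations (a toy single-process sketch; it uses json where the actual script below uses yaml, and the site name, request id and artifact are made up):

```python
import json, os, tempfile

base = tempfile.mkdtemp()
req_dir = os.path.join(base, 'requests'); os.mkdir(req_dir)
resp_dir = os.path.join(base, 'responses'); os.mkdir(resp_dir)

# jenkins side: create a request for site user 'site1', request id '42'
req = os.path.join(req_dir, 'site1.42.json')
with open(req, 'w') as f:
    json.dump({'artifact': 'build-1.tar.gz'}, f)

# site user side: pick up the request, publish status updates in a response file
resp = os.path.join(resp_dir, 'site1.42.json')
with open(resp, 'w') as f:
    json.dump({'status': 'in_progress'}, f)
# ... the actual deployment would happen here ...
with open(resp, 'w') as f:
    json.dump({'status': 'done'}, f)

# jenkins side: see 'done', confirm receipt by removing the request file
with open(resp) as f:
    assert json.load(f)['status'] == 'done'
os.remove(req)

# site user side: confirmation observed, clean up the response file
if not os.path.exists(req):
    os.remove(resp)
print(os.listdir(req_dir), os.listdir(resp_dir))  # -> [] []
```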
OK, an example, as requested. Just a basic skeleton, in python, hopefully self-documenting.
Prerequisites:
```
sudo mkdir -p /var/message_box/requests
sudo chown jenkins /var/message_box/requests
sudo chmod go-w /var/message_box/requests
sudo mkdir /var/message_box/responses
sudo chgrp apache /var/message_box/responses
sudo chmod g+w /var/message_box/responses
```
The mailbox.py file:
```
#!/usr/bin/python2.7 -u
import logging, os, re, getpass, sys, time, yaml

class Mailbox(object):
    base_dir = '/var/message_box'
    request_filename_format = '%s.%s.yaml'  # username.id.yaml

    def __init__(self):
        pass

    @property
    def request_dir(self):
        return os.path.join(self.base_dir, 'requests')

    @property
    def response_dir(self):
        return os.path.join(self.base_dir, 'responses')

    def msg_filename(self, user, request_id):
        return self.request_filename_format % (user, request_id)

    def request_file(self, user, request_id):
        return os.path.join(self.request_dir, self.msg_filename(user, request_id))

    def response_file(self, user, request_id):
        return os.path.join(self.response_dir, self.msg_filename(user, request_id))

    def create_msg_file(self, user, request_id, data, is_response=False):
        assert user and request_id and data and isinstance(data, dict)
        msg_file = self.response_file(user, request_id) if is_response else \
            self.request_file(user, request_id)
        with open(msg_file, 'w') as fd:
            fd.write(yaml.dump(data))

    def msg_file_data(self, user, request_id, is_response=False):
        msg_file = self.response_file(user, request_id) if is_response else \
            self.request_file(user, request_id)
        if os.path.exists(msg_file):
            with open(msg_file) as fd:
                data = yaml.load(fd.read())
            if data and isinstance(data, dict):  # expected data format
                return data
        return None

    def create_request(self, user, request_id, data):
        self.create_msg_file(user, request_id, data)
        logging.info('created request %s for %s' % (request_id, user))

    def create_response(self, request_id, status, response_data=None):
        assert status
        user = getpass.getuser()
        self.create_msg_file(user, request_id, {'status': status, 'data': response_data}, is_response=True)
        logging.info('created response %s with status %s for %s' % (request_id, status, user))

    def handle_requests(self):
        user = getpass.getuser()
        while True:  # keep handling requests indefinitely
            time.sleep(1)  # new request polling rate, in seconds
            for filename in os.listdir(self.request_dir):
                m = re.match('(.*)\.(.*)\.yaml', filename)
                if not m:  # not a valid request filename
                    continue
                [username, request_id] = m.groups()
                if username != user:  # not a request for this user
                    continue
                if os.path.exists(self.response_file(user, request_id)):
                    # request handling already started
                    # you may add here recovery code for request handling interrupted for whatever reason
                    continue
                msg_data = self.msg_file_data(user, request_id)
                if not msg_data:  # file not (fully) written yet, retry on the next pass
                    continue
                # NOTE: the rest of this method was cut off in the scraped answer;
                # the steps below are reconstructed from the example run logs further down
                logging.info('received request %s: %s' % (request_id, msg_data))
                self.create_response(request_id, 'in_progress')
                # ... perform the actual deployment here, per this site's own policies ...
                self.create_response(request_id, 'done')
                logging.info('handled request %s, waiting for confirmation' % request_id)
                # the jenkins user confirms receipt by removing the request file
                while os.path.exists(self.request_file(user, request_id)):
                    time.sleep(1)
                logging.info('confirmation for request %s received, cleaning up' % request_id)
                os.remove(self.response_file(user, request_id))
```
$ ./mailbox.py -c deploy -i 20 -u dancorn -a artifact
INFO 2017-10-17 13:32:34,663 mailbox.py:49] created request 20 for dancorn
INFO 2017-10-17 13:32:35,666 mailbox.py:109] request 20 handling started
INFO 2017-10-17 13:32:40,678 mailbox.py:112] request 20 handling completed, cleaning up
$ ./mailbox.py -c deploy -i 123 -u dancorn -a artifact
INFO 2017-10-17 13:33:32,359 mailbox.py:49] created request 123 for dancorn
INFO 2017-10-17 13:33:33,362 mailbox.py:109] request 123 handling started
INFO 2017-10-17 13:33:38,375 mailbox.py:112] request 123 handling completed, cleaning up
$ ./mailbox.py -c handler
INFO 2017-10-17 13:32:34,819 mailbox.py:77] received request 20: {'artifact': 'artifact'}
INFO 2017-10-17 13:32:34,821 mailbox.py:55] created response 20 with status in_progress for dancorn
INFO 2017-10-17 13:32:39,827 mailbox.py:55] created response 20 with status done for dancorn
INFO 2017-10-17 13:32:39,827 mailbox.py:87] handled request 20, waiting for confirmation
INFO 2017-10-17 13:32:40,828 mailbox.py:93] confirmation for request 20 received, cleaning up
INFO 2017-10-17 13:33:32,888 mailbox.py:77] received request 123: {'artifact': 'artifact'}
INFO 2017-10-17 13:33:32,889 mailbox.py:55] created response 123 with status in_progress for dancorn
INFO 2017-10-17 13:33:37,891 mailbox.py:55] created response 123 with status done for dancorn
INFO 2017-10-17 13:33:37,891 mailbox.py:87] handled request 123, waiting for confirmation
INFO 2017-10-17 13:33:38,893 mailbox.py:93] confirmation for request 123 received, cleaning up
Context
StackExchange DevOps Q#2332, answer score: 2