Create list using list comprehension #644

Open
wants to merge 7 commits into base: release/v1.12.6
13 changes: 10 additions & 3 deletions README.rst
@@ -341,7 +341,7 @@ Then you can use this to deploy that zipfile:
asyncId = result.get('asyncId')
state = result.get('state')

Both deploy and checkDeployStatus take keyword arguements. The single package arguement is not currently available to be set for deployments. More details on the deploy options can be found at https://developer.salesforce.com/docs/atlas.en-us.api_meta.meta/api_meta/meta_deploy.htm
Both deploy and checkDeployStatus take keyword arguments. The single package argument is not currently available to be set for deployments. More details on the deploy options can be found at https://developer.salesforce.com/docs/atlas.en-us.api_meta.meta/api_meta/meta_deploy.htm
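As a hedged sketch of the call shapes this paragraph refers to (the zip path is illustrative and the keyword options are left at their defaults, not taken from this diff):

.. code-block:: python

    # Kick off a deployment from a zip file and poll its status.
    result = sf.deploy('path/to/deploy.zip', sandbox=False)
    asyncId = result.get('asyncId')

    # Returns the status dictionary described below.
    deploy_status = sf.checkDeployStatus(asyncId)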

You can check on the progress of the deploy which returns a dictionary with status, state_detail, deployment_detail, unit_test_detail:

@@ -419,7 +419,7 @@ To retrieve a list of top level description of instance metadata, user:
Using Bulk
--------------------------

You can use this library to access Bulk API functions. The data element can be a list of records of any size and by default batch sizes are 10,000 records and run in parrallel concurrency mode. To set the batch size for insert, upsert, delete, hard_delete, and update use the batch_size argument. To set the concurrency mode for the salesforce job the use_serial argument can be set to use_serial=True.
You can use this library to access Bulk API functions. The data element can be a list of records of any size and by default batch sizes are 10,000 records and run in parallel concurrency mode. To set the batch size for insert, upsert, delete, hard_delete, and update use the batch_size argument. To set the concurrency mode for the salesforce job the use_serial argument can be set to use_serial=True.
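A minimal, hedged sketch of those two arguments in use, assuming a Contact insert like the examples below (field values are illustrative):

.. code-block:: python

    data = [
        {'LastName': 'Smith', 'Email': 'smith@example.com'},
        {'LastName': 'Jones', 'Email': 'jones@example.com'},
    ]

    # 10,000 records per batch, submitted in serial concurrency mode.
    sf.bulk.Contact.insert(data, batch_size=10000, use_serial=True)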

Create new records:

@@ -443,7 +443,7 @@ Update existing records:

sf.bulk.Contact.update(data,batch_size=10000,use_serial=True)

Update existing records and update lookup fields from an external id field:
Update existing records and update lookup fields from an external id field:

.. code-block:: python

@@ -665,6 +665,13 @@ Generate Pandas Dataframe from SFDC Bulk API Query (ex.bulk.Account.query)
df = pd.DataFrame.from_dict(data,orient='columns').drop('attributes',axis=1)
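A hedged, self-contained version of this pattern, assuming a simple SOQL query against Account (the query text is illustrative):

.. code-block:: python

    import pandas as pd

    # Bulk query results are dicts that carry an 'attributes' entry,
    # which is dropped when building the DataFrame.
    data = sf.bulk.Account.query("SELECT Id, Name FROM Account")
    df = pd.DataFrame.from_dict(data, orient='columns').drop('attributes', axis=1)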


YouTube Tutorial
--------------------------
Here is a helpful `YouTube tutorial`_ which shows how you can manage records in bulk using a Jupyter notebook, simple-salesforce and pandas.

This can be an effective way to manage records and perform simple operations like reassigning accounts, deleting test records, inserting new records, etc.

.. _YouTube tutorial: https://youtu.be/nPQFUgsk6Oo?t=282

Authors & License
--------------------------
2 changes: 1 addition & 1 deletion docs/user_guide/other_REST_examples.rst
@@ -5,7 +5,7 @@ In addition to built in functions that allow integrations with the Salesforce Ap

.. _SFDC Submit documentation: https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_process_approvals_submit.htm

1. Import any dependant packages to your script. For this example we use Simple Salesforce, Requests and the built in JSON package:
1. Import any dependent packages to your script. For this example we use Simple Salesforce, Requests and the built in JSON package:

.. code-block:: python

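The example body is collapsed in this diff view; a hedged sketch of the imports that step 1 describes might look like this:

.. code-block:: python

    import json

    import requests
    from simple_salesforce import Salesforce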
58 changes: 29 additions & 29 deletions setup.py
@@ -26,33 +26,33 @@
package_data={
'simple_salesforce': ['metadata.wsdl'],
},
install_requires = [
'requests>=2.22.0',
'pyjwt',
'zeep'
],
tests_require = [
'pytest',
'pytz>=2014.1.1',
'responses>=0.5.1',
'cryptography<3.4',
],
test_suite = 'simple_salesforce.tests',
keywords = about[
'__keywords__'],
classifiers = [
'Development Status :: 5 - Production/Stable',
'License :: OSI Approved :: Apache Software License',
'Intended Audience :: Developers',
'Intended Audience :: System Administrators',
'Operating System :: OS Independent',
'Topic :: Internet :: WWW/HTTP',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
'Programming Language :: Python :: 3.10',
'Programming Language :: Python :: 3.11',
'Programming Language :: Python :: Implementation :: PyPy'
]
install_requires = [
'requests>=2.22.0',
'cryptography',
'zeep',
'pyjwt'
],
tests_require = [
'pytest',
'pytz>=2014.1.1',
'responses>=0.5.1',
'cryptography<3.4',
],
test_suite = 'simple_salesforce.tests',
keywords = about['__keywords__'],
classifiers = [
'Development Status :: 5 - Production/Stable',
'License :: OSI Approved :: Apache Software License',
'Intended Audience :: Developers',
'Intended Audience :: System Administrators',
'Operating System :: OS Independent',
'Topic :: Internet :: WWW/HTTP',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
'Programming Language :: Python :: 3.10',
'Programming Language :: Python :: 3.11',
'Programming Language :: Python :: Implementation :: PyPy'
]
)
3 changes: 2 additions & 1 deletion simple_salesforce/api.py
@@ -319,7 +319,8 @@ def __getattr__(self, name):

return SFType(
name, self.session_id, self.sf_instance, sf_version=self.sf_version,
proxies=self.proxies, session=self.session, salesforce=self)
proxies=self.proxies, session=self.session, salesforce=self,
object_pairs_hook=self._object_pairs_hook)

# User utility methods
def set_password(self, user, password):
2 changes: 1 addition & 1 deletion simple_salesforce/login.py
@@ -171,7 +171,7 @@ def SalesforceLogin(
expiration = datetime.now(timezone.utc) + timedelta(minutes=3)
payload = {
'iss': consumer_key,
'sub': username,
'sub': unescape(username),
'aud': f'https://{domain}.salesforce.com',
'exp': f'{expiration.timestamp():.0f}'
}
48 changes: 25 additions & 23 deletions simple_salesforce/metadata.py
@@ -396,10 +396,8 @@ def check_deploy_status(self, async_process_id, **kwargs):
result.find('mt:numberComponentErrors', self._XML_NAMESPACES).text)
if state == 'Failed' or failed_count > 0:
# Deployment failures
failures = result.findall('mt:details/mt:componentFailures',
self._XML_NAMESPACES)
for failure in failures:
deployment_errors.append({
deployment_errors = [
{
'type': failure.find('mt:componentType',
self._XML_NAMESPACES).text,
'file': failure.find('mt:fileName',
@@ -408,21 +406,25 @@
self._XML_NAMESPACES).text,
'message': failure.find('mt:problem',
self._XML_NAMESPACES).text
})
}
for failure in result.findall('mt:details/mt:componentFailures',
self._XML_NAMESPACES)
]
# Unit test failures
failures = result.findall(
'mt:details/mt:runTestResult/mt:failures',
self._XML_NAMESPACES)
for failure in failures:
unit_test_errors.append({
unit_test_errors = [
{
'class': failure.find('mt:name', self._XML_NAMESPACES).text,
'method': failure.find('mt:methodName',
self._XML_NAMESPACES).text,
'message': failure.find('mt:message',
self._XML_NAMESPACES).text,
'stack_trace': failure.find('mt:stackTrace',
self._XML_NAMESPACES).text
})
}
for failure in result.findall(
'mt:details/mt:runTestResult/mt:failures',
self._XML_NAMESPACES)
]

deployment_detail = {
'total_count': result.find('mt:numberComponentsTotal',
@@ -543,14 +545,14 @@ def retrieve_zip(self, async_process_id, **kwargs):
error_message = error_message.text

# Check if there are any messages
messages = []
message_list = result.findall('mt:details/mt:messages',
self._XML_NAMESPACES)
for message in message_list:
messages.append({
messages = [
{
'file': message.find('mt:fileName', self._XML_NAMESPACES).text,
'message': message.find('mt:problem', self._XML_NAMESPACES).text
})
}
for message in result.findall('mt:details/mt:messages',
self._XML_NAMESPACES)
]

# Retrieve base64 encoded ZIP file
zipfile_base64 = result.find('mt:zipFile', self._XML_NAMESPACES).text
@@ -568,13 +570,13 @@ def check_retrieve_status(self, async_process_id, **kwargs):
error_message = error_message.text

# Check if there are any messages
messages = []
message_list = result.findall('mt:details/mt:messages',
self._XML_NAMESPACES)
for message in message_list:
messages.append({
messages = [
{
'file': message.find('mt:fileName', self._XML_NAMESPACES).text,
'message': message.find('mt:problem', self._XML_NAMESPACES).text
})
}
for message in result.findall('mt:details/mt:messages',
self._XML_NAMESPACES)
]

return state, error_message, messages
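For readers skimming this diff, the refactor applied throughout metadata.py is the standard rewrite of an append loop into a list comprehension. A generic, hedged sketch of the pattern (names are illustrative, not from this codebase):

.. code-block:: python

    records = [{'name': 'a', 'ok': True}, {'name': 'b', 'ok': False}]

    # Before: build the list by appending inside a loop.
    failures = []
    for record in records:
        if not record['ok']:
            failures.append({'name': record['name']})

    # After: the equivalent list comprehension.
    failures = [{'name': record['name']} for record in records if not record['ok']]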