39 changes: 39 additions & 0 deletions README.md
@@ -0,0 +1,39 @@
# LiveCode Thirdparty Libraries

The procedure for updating `thirdparty` on the `develop-9.0` and `develop`
branches is slightly different due to the `thirdparty` prebuilts on `develop`.

## Update procedure for develop-9.0

1. Push up a PR to this repo
2. If there are related patches to other repos then push up PRs for those too
3. Create a test branch on `livecode` with updated submodule pointers (see the
   sketch after this list). Ensure the branch is pushed upstream rather than to
   your clone.
4. If it was a CEF update then also push up a branch to `livecode-private`,
   again with the updated submodule, and kick off prebuilts on vulcan.
5. Create a PR with a title beginning with `[[ NO MERGE ]]` and review it OK
   to confirm it passes on travis and vulcan
6. Close the PR from step 5 and link to it in the other related PRs
7. Once all related PRs are reviewed, merge `thirdparty`, update the submodules
   in `livecode`, and merge any related PRs
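
A minimal sketch of step 3, assuming the submodule is checked out at
`thirdparty` inside a `livecode` clone and the shared remote is named
`upstream` (the branch and commit names below are equally hypothetical):

```sh
# From a livecode checkout; every name below is an assumption.
git checkout -b thirdparty-update develop-9.0
cd thirdparty
git fetch origin
git checkout <updated-thirdparty-commit>  # the commit from the thirdparty PR
cd ..
git add thirdparty                        # record the new submodule pointer
git commit -m "Update thirdparty submodule"
# Push to the shared upstream remote, not your personal clone,
# so travis and vulcan can see the branch.
git push upstream thirdparty-update
```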

## Update procedure for develop

The same procedure applies to merge-ups from `develop-9.0` because the
`thirdparty` prebuilts must be rebuilt for the current head of the `develop`
branch.

1. Push up a PR to this repo
2. If there are related patches to other repos then push up PRs for those too
3. Review and merge the PR for `thirdparty` but *do not* update the submodule
   pointers in `livecode` yet.
4. Create a test branch on `livecode` with updated submodule pointers. Ensure
   the branch is pushed upstream rather than to your clone.
5. Push up a branch to `livecode-private`, again with the updated submodule,
   and kick off prebuilts on vulcan.
6. Once prebuilts are built for the `develop` head of `thirdparty`, create a PR
   using the `livecode` branch with a title beginning with `[[ NO MERGE ]]` and
   review it OK to confirm it passes on travis and vulcan
7. Close the PR from step 6 and link to it in the other related PRs
8. Once all related PRs are reviewed, update the submodules in `livecode` (see
   the sketch after this list) and merge any related PRs
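
The submodule bump in step 8 might look like the following minimal sketch; the
checkout layout and names are assumptions, not part of the documented
procedure:

```sh
# In a livecode checkout after the thirdparty PR has merged;
# layout and remote names are assumptions.
cd thirdparty
git fetch origin
git checkout origin/develop   # point the submodule at the merged develop head
cd ..
git add thirdparty            # record the new submodule pointer
git commit -m "Update thirdparty submodule to develop head"
```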
11 changes: 8 additions & 3 deletions libcef/UPDATE.md
@@ -1,14 +1,19 @@
 # Steps for updating CEF
 
 - Go to http://opensource.spotify.com/cefbuilds/index.html
-- Download the latest standard distribution for Windows or Linux and extract
+- Download the latest standard distribution for Windows and Linux and extract
 - Delete `thirdparty/libcef/include` and `thirdparty/libcef/libcef_dll`
-- Copy `include` and `libcef_dll` from the CEF build to `thirdparty/libcef`
-- CD to `thirdparty/libcef`
+- Copy `include` and `libcef_dll` from the CEF windows build to
+  `thirdparty/libcef`
 - Execute `python gen_libcef_paths.py libcef_dll > libcef.gypi`
 - Execute `python gen_libcef_stubs.py include > libcef.stubs`
 - Stderr will list unknown parameter or return types so edit `gen_libcef_stubs.py`
   to add them and run again
+- Copy `include` from the CEF linux build to `thirdparty/libcef` to merge in
+  the extra headers required for linux. Note this is done after the stubs are
+  generated so we don't include stubs for anything linux specific which seems to
+  be `cef_get_xdisplay` at the moment.
+- CD to `thirdparty/libcef`
 - Copy binaries and resources from the CEF build to the prebuilt folders checking
   for any new file names to update in the gyp files and prebuilt builder. Note copy
   the binaries from the Release folder even to the debug prebuilt folders.
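
As a rough consolidation of the steps above, the update might be scripted like
this; the `~/cef_windows` and `~/cef_linux` extraction paths are assumptions:

```sh
# Assumed paths: CEF distributions extracted to ~/cef_windows and ~/cef_linux.
cd thirdparty/libcef
rm -rf include libcef_dll
cp -r ~/cef_windows/include ~/cef_windows/libcef_dll .
python gen_libcef_paths.py libcef_dll > libcef.gypi
python gen_libcef_stubs.py include > libcef.stubs  # fix any unknown types it
                                                   # reports, then re-run
# Merge linux-only headers after stub generation so no linux-specific
# stubs (e.g. cef_get_xdisplay) are produced.
cp -r ~/cef_linux/include .
```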
325 changes: 325 additions & 0 deletions libcef/include/base/internal/cef_atomicops_arm_gcc.h
@@ -0,0 +1,325 @@
// Copyright (c) 2013 Google Inc. All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the name Chromium Embedded
// Framework nor the names of its contributors may be used to endorse
// or promote products derived from this software without specific prior
// written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Do not include this header file directly. Use base/cef_atomicops.h
// instead.
//
// LinuxKernelCmpxchg and Barrier_AtomicIncrement are from Google Gears.

#ifndef CEF_INCLUDE_BASE_INTERNAL_CEF_ATOMICOPS_ARM_GCC_H_
#define CEF_INCLUDE_BASE_INTERNAL_CEF_ATOMICOPS_ARM_GCC_H_

#if defined(OS_QNX)
#include <sys/cpuinline.h>
#endif

namespace base {
namespace subtle {

// Memory barriers on ARM are funky, but the kernel is here to help:
//
// * ARMv5 didn't support SMP; there is no memory barrier instruction at
// all on this architecture, or when targeting its machine code.
//
// * Some ARMv6 CPUs support SMP. A full memory barrier can be produced by
// writing a random value to a very specific coprocessor register.
//
// * On ARMv7, the "dmb" instruction is used to perform a full memory
// barrier (though writing to the co-processor will still work).
// However, on single core devices (e.g. Nexus One or Nexus S),
// this instruction will take up to 200 ns, which is huge, even though
// it's completely unneeded on these devices.
//
// * There is no easy way to determine at runtime if the device is
// single or multi-core. However, the kernel provides a useful helper
// function at a fixed memory address (0xffff0fa0), which will always
// perform a memory barrier in the most efficient way. I.e. on single
// core devices, this is an empty function that exits immediately.
// On multi-core devices, it implements a full memory barrier.
//
// * This source could be compiled to ARMv5 machine code that runs on a
// multi-core ARMv6 or ARMv7 device. In this case, memory barriers
// are needed for correct execution. Always call the kernel helper, even
// when targeting ARMv5TE.
//

inline void MemoryBarrier() {
#if defined(OS_LINUX) || defined(OS_ANDROID)
// Note: This is a function call, which is also an implicit compiler barrier.
typedef void (*KernelMemoryBarrierFunc)();
((KernelMemoryBarrierFunc)0xffff0fa0)();
#elif defined(OS_QNX)
__cpu_membarrier();
#else
#error MemoryBarrier() is not implemented on this platform.
#endif
}

// An ARM toolchain would only define one of these depending on which
// variant of the target architecture is being used. This tests against
// any known ARMv6 or ARMv7 variant, where it is possible to directly
// use ldrex/strex instructions to implement fast atomic operations.
#if defined(__ARM_ARCH_7__) || defined(__ARM_ARCH_7A__) || \
defined(__ARM_ARCH_7R__) || defined(__ARM_ARCH_7M__) || \
defined(__ARM_ARCH_6__) || defined(__ARM_ARCH_6J__) || \
defined(__ARM_ARCH_6K__) || defined(__ARM_ARCH_6Z__) || \
defined(__ARM_ARCH_6ZK__) || defined(__ARM_ARCH_6T2__)

inline Atomic32 NoBarrier_CompareAndSwap(volatile Atomic32* ptr,
Atomic32 old_value,
Atomic32 new_value) {
Atomic32 prev_value;
int reloop;
do {
// The following is equivalent to:
//
// prev_value = LDREX(ptr)
// reloop = 0
// if (prev_value != old_value)
// reloop = STREX(ptr, new_value)
__asm__ __volatile__(
" ldrex %0, [%3]\n"
" mov %1, #0\n"
" cmp %0, %4\n"
#ifdef __thumb2__
" it eq\n"
#endif
" strexeq %1, %5, [%3]\n"
: "=&r"(prev_value), "=&r"(reloop), "+m"(*ptr)
: "r"(ptr), "r"(old_value), "r"(new_value)
: "cc", "memory");
} while (reloop != 0);
return prev_value;
}

inline Atomic32 Acquire_CompareAndSwap(volatile Atomic32* ptr,
Atomic32 old_value,
Atomic32 new_value) {
Atomic32 result = NoBarrier_CompareAndSwap(ptr, old_value, new_value);
MemoryBarrier();
return result;
}

inline Atomic32 Release_CompareAndSwap(volatile Atomic32* ptr,
Atomic32 old_value,
Atomic32 new_value) {
MemoryBarrier();
return NoBarrier_CompareAndSwap(ptr, old_value, new_value);
}

inline Atomic32 NoBarrier_AtomicIncrement(volatile Atomic32* ptr,
Atomic32 increment) {
Atomic32 value;
int reloop;
do {
// Equivalent to:
//
// value = LDREX(ptr)
// value += increment
// reloop = STREX(ptr, value)
//
__asm__ __volatile__(
" ldrex %0, [%3]\n"
" add %0, %0, %4\n"
" strex %1, %0, [%3]\n"
: "=&r"(value), "=&r"(reloop), "+m"(*ptr)
: "r"(ptr), "r"(increment)
: "cc", "memory");
} while (reloop);
return value;
}

inline Atomic32 Barrier_AtomicIncrement(volatile Atomic32* ptr,
Atomic32 increment) {
// TODO(digit): Investigate if it's possible to implement this with
// a single MemoryBarrier() operation between the LDREX and STREX.
// See http://crbug.com/246514
MemoryBarrier();
Atomic32 result = NoBarrier_AtomicIncrement(ptr, increment);
MemoryBarrier();
return result;
}

inline Atomic32 NoBarrier_AtomicExchange(volatile Atomic32* ptr,
Atomic32 new_value) {
Atomic32 old_value;
int reloop;
do {
// old_value = LDREX(ptr)
// reloop = STREX(ptr, new_value)
__asm__ __volatile__(
" ldrex %0, [%3]\n"
" strex %1, %4, [%3]\n"
: "=&r"(old_value), "=&r"(reloop), "+m"(*ptr)
: "r"(ptr), "r"(new_value)
: "cc", "memory");
} while (reloop != 0);
return old_value;
}

// This tests against any known ARMv5 variant.
#elif defined(__ARM_ARCH_5__) || defined(__ARM_ARCH_5T__) || \
defined(__ARM_ARCH_5TE__) || defined(__ARM_ARCH_5TEJ__)

// The kernel also provides a helper function to perform an atomic
// compare-and-swap operation at the hard-wired address 0xffff0fc0.
// On ARMv5, this is implemented by a special code path that the kernel
// detects and treats specially when thread pre-emption happens.
// On ARMv6 and higher, it uses LDREX/STREX instructions instead.
//
// Note that this always performs a full memory barrier, so there is no
// need to add calls to MemoryBarrier() before or after it. It also
// returns 0 on success, and 1 on failure.
//
// Available and reliable since Linux 2.6.24. Both Android and ChromeOS
// use newer kernel revisions, so this should not be a concern.
namespace {

inline int LinuxKernelCmpxchg(Atomic32 old_value,
Atomic32 new_value,
volatile Atomic32* ptr) {
typedef int (*KernelCmpxchgFunc)(Atomic32, Atomic32, volatile Atomic32*);
return ((KernelCmpxchgFunc)0xffff0fc0)(old_value, new_value, ptr);
}

} // namespace

inline Atomic32 NoBarrier_CompareAndSwap(volatile Atomic32* ptr,
Atomic32 old_value,
Atomic32 new_value) {
Atomic32 prev_value;
for (;;) {
prev_value = *ptr;
if (prev_value != old_value)
return prev_value;
if (!LinuxKernelCmpxchg(old_value, new_value, ptr))
return old_value;
}
}

inline Atomic32 NoBarrier_AtomicExchange(volatile Atomic32* ptr,
Atomic32 new_value) {
Atomic32 old_value;
do {
old_value = *ptr;
} while (LinuxKernelCmpxchg(old_value, new_value, ptr));
return old_value;
}

inline Atomic32 NoBarrier_AtomicIncrement(volatile Atomic32* ptr,
Atomic32 increment) {
return Barrier_AtomicIncrement(ptr, increment);
}

inline Atomic32 Barrier_AtomicIncrement(volatile Atomic32* ptr,
Atomic32 increment) {
for (;;) {
// Atomic exchange the old value with an incremented one.
Atomic32 old_value = *ptr;
Atomic32 new_value = old_value + increment;
if (!LinuxKernelCmpxchg(old_value, new_value, ptr)) {
// The exchange took place as expected.
return new_value;
}
// Otherwise, *ptr changed mid-loop and we need to retry.
}
}

inline Atomic32 Acquire_CompareAndSwap(volatile Atomic32* ptr,
Atomic32 old_value,
Atomic32 new_value) {
Atomic32 prev_value;
for (;;) {
prev_value = *ptr;
if (prev_value != old_value) {
// Always ensure acquire semantics.
MemoryBarrier();
return prev_value;
}
if (!LinuxKernelCmpxchg(old_value, new_value, ptr))
return old_value;
}
}

inline Atomic32 Release_CompareAndSwap(volatile Atomic32* ptr,
Atomic32 old_value,
Atomic32 new_value) {
// This could be implemented as:
// MemoryBarrier();
// return NoBarrier_CompareAndSwap();
//
// But would use 3 barriers per successful CAS. To save performance,
// use Acquire_CompareAndSwap(). Its implementation guarantees that:
// - A successful swap uses only 2 barriers (in the kernel helper).
// - An early return due to (prev_value != old_value) performs
// a memory barrier with no store, which is equivalent to the
// generic implementation above.
return Acquire_CompareAndSwap(ptr, old_value, new_value);
}

#else
#error "Your CPU's ARM architecture is not supported yet"
#endif

// NOTE: Atomicity of the following load and store operations is only
// guaranteed in case of 32-bit alignment of |ptr| values.

inline void NoBarrier_Store(volatile Atomic32* ptr, Atomic32 value) {
*ptr = value;
}

inline void Acquire_Store(volatile Atomic32* ptr, Atomic32 value) {
*ptr = value;
MemoryBarrier();
}

inline void Release_Store(volatile Atomic32* ptr, Atomic32 value) {
MemoryBarrier();
*ptr = value;
}

inline Atomic32 NoBarrier_Load(volatile const Atomic32* ptr) {
return *ptr;
}

inline Atomic32 Acquire_Load(volatile const Atomic32* ptr) {
Atomic32 value = *ptr;
MemoryBarrier();
return value;
}

inline Atomic32 Release_Load(volatile const Atomic32* ptr) {
MemoryBarrier();
return *ptr;
}

} // namespace base::subtle
} // namespace base

#endif // CEF_INCLUDE_BASE_INTERNAL_CEF_ATOMICOPS_ARM_GCC_H_